2006
DOI: 10.1017/s0140525x06309028
Vector symbolic architectures are a viable alternative for Jackendoff's challenges

Abstract: The authors, on the basis of brief arguments, have dismissed tensor networks as a viable response to Jackendoff's challenges. However, there are reasons to believe that connectionist approaches descended from tensor networks are actually very well suited to answering Jackendoff's challenges. I rebut their arguments for dismissing tensor networks and briefly compare the approaches.
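To make the comparison concrete, here is a small sketch (mine, not from the commentary) of the two core VSA operations — binding and superposition — in the style of a Binary Spatter Code, using XOR over random binary vectors. The dimensionality, variable names, and the role-filler example are illustrative assumptions.

```python
import random

random.seed(42)
DIM = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """Random dense binary vector; any two are ~50% different."""
    return [random.getrandbits(1) for _ in range(DIM)]

def bind(a, b):
    """XOR binding: the result is dissimilar to both inputs, and self-inverse."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(vs):
    """Bitwise majority vote: the superposition stays similar to each input."""
    return [1 if sum(bits) * 2 > len(vs) else 0 for bits in zip(*vs)]

def dist(a, b):
    """Normalized Hamming distance: ~0.0 identical, ~0.5 unrelated."""
    return sum(x ^ y for x, y in zip(a, b)) / DIM

# Role-filler composition of a structured object: {color: red, shape: square, size: big}
color, shape, size = rand_vec(), rand_vec(), rand_vec()
red, square, big = rand_vec(), rand_vec(), rand_vec()
obj = bundle([bind(color, red), bind(shape, square), bind(size, big)])

# Unbinding the `color` role recovers a vector close to `red` ...
probe = bind(obj, color)
print(round(dist(probe, red), 2))     # well below chance
# ... and at chance distance (~0.5) from the other fillers
print(round(dist(probe, square), 2))
```

The point of the sketch is that compositional structure lives in fixed-width vectors, which is the property at stake in Jackendoff's challenges.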


Cited by 41 publications (38 citation statements) | References 28 publications
“…One of the fundamental aims of neuroscience is to comprehend how the brain manages to process signals from the outside world into a coherent informational universe that allows us to survive and prosper. In human cognition, the existence of language adds a new and novel codification strategy, one in which conceptual constructions are mapped onto words (Gayler 2006).…”
Section: Discussion
confidence: 99%
“…The vector components can be binary, ternary, real, or complex values [82,83]. In our experiments, we use 32,000-dimensional binary vectors constructed in accordance with the Binary Spatter Code (BSC) [84], one of a family of representational approaches known as Vector Symbolic Architectures (VSAs) [84-87]. This dimensionality was selected based on the results of simulation experiments in previous research [88], which suggest that at this dimensionality around 2000 unique elemental vectors can be superposed with low probability of the superposed product being closer to some other elemental vector in the space than its component vectors.…”
Section: Methods
confidence: 99%
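The capacity claim above — that many elemental vectors can be superposed while each remains recognizable — can be probed with a smaller-scale sketch. The numbers here (10,000 dimensions, 21 superposed vectors) are my illustrative assumptions, not the cited experiment's.

```python
import random

random.seed(1)
DIM = 10_000   # illustrative; the cited work uses 32,000 dimensions
K = 21         # number of superposed elemental vectors (odd, to avoid ties)

def rand_vec():
    return [random.getrandbits(1) for _ in range(DIM)]

def dist(a, b):
    """Normalized Hamming distance: ~0.5 for unrelated vectors."""
    return sum(x ^ y for x, y in zip(a, b)) / DIM

items = [rand_vec() for _ in range(K)]
# Majority-vote superposition (Binary Spatter Code bundling)
memory = [1 if sum(bits) * 2 > K else 0 for bits in zip(*items)]

# Every superposed item stays measurably closer to the memory vector
# than an unrelated elemental vector, which sits at chance distance (~0.5)
worst_member = max(dist(memory, v) for v in items)
outsider = dist(memory, rand_vec())
print(worst_member < outsider)  # True: members are still recognizable
```

As K grows toward the dimensionality-dependent capacity limit, the member distances drift toward chance and retrieval eventually fails, which is the trade-off the cited simulation experiments quantify.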
“…Binding (⊗) is a compositional operation that is provided by VSAs, such as the BSC [86,87]. Binding two elemental vectors generates a third vector, which is dissimilar from these two component vectors.…”
Section: Methods
confidence: 99%
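The two properties the excerpt ascribes to binding (⊗) — that the bound vector is dissimilar from both components, yet the operation is invertible — can be checked directly for XOR binding, the BSC's ⊗. This is an illustrative sketch, not code from the cited work.

```python
import random

random.seed(7)
DIM = 10_000

def rand_vec():
    return [random.getrandbits(1) for _ in range(DIM)]

def bind(a, b):
    """BSC binding is component-wise XOR."""
    return [x ^ y for x, y in zip(a, b)]

def dist(a, b):
    """Normalized Hamming distance: ~0.5 for unrelated vectors."""
    return sum(x ^ y for x, y in zip(a, b)) / DIM

a, b = rand_vec(), rand_vec()
c = bind(a, b)

# The bound vector sits at chance distance (~0.5) from both components ...
print(round(dist(c, a), 1), round(dist(c, b), 1))  # 0.5 0.5
# ... yet XOR is self-inverse: binding with either component recovers the other exactly
print(bind(c, b) == a, bind(c, a) == b)  # True True
```

Self-inverse binding is specific to XOR over binary vectors; other VSAs (e.g. those using circular convolution over real vectors) need a distinct approximate-inverse operation for unbinding.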
“…Treves, 2005), does not advance our understanding of the computational question, how can compositional structures be processed at the neuronal level. Various exciting proposals have been put forward (Pulvermüller, 2002; van der Velde and de Kamps, 2006; Gayler, 2003, 2006; Eliasmith and Thagard, 2001; Hashimoto, 2008), often as neuronal mechanisms for identifying syntactic structure, but they do not yet satisfactorily solve the problem of matching semantic with syntactic representations, and thus leave open the question of how to match meanings with neural representations. Again, the core issue is what neural mechanisms can facilitate compositional interpretation, and it appears necessary to address it using model systems of sufficient complexity, where solutions can be imagined to scale up to real-life problems.…”
Section: Compositionality Can Only Be Analysed In Sufficiently Large …
confidence: 99%