2005
DOI: 10.1016/j.neunet.2005.01.005

Linear recursive distributed representations

Abstract: Connectionist networks have been criticized for their inability to represent complex structures with systematicity. That is, while they can be trained to represent and manipulate complex objects made of several constituents, they generally fail to generalize to novel combinations of the same constituents. This paper presents a modification of Pollack's Recursive Auto-Associative Memory (RAAM) that addresses this criticism. The network uses linear units and is trained with Oja's rule, in which it generalizes P…
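The abstract names Oja's rule as the learning rule for the network's linear units. As a point of reference, below is a minimal sketch of the standard single-unit form of Oja's rule, under which the weight vector of a linear unit converges to the first principal component of its input. The toy data, hyperparameters, and variable names are illustrative assumptions; this is not a reconstruction of the paper's full linear RAAM architecture.

```python
import numpy as np

# Minimal sketch of the standard single-unit form of Oja's rule.
# Toy data and hyperparameters are illustrative assumptions; this is not
# the paper's full linear RAAM.
rng = np.random.default_rng(0)

# 2-D toy inputs with a dominant variance direction.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])

w = rng.normal(size=2)   # weight vector of the linear unit
eta = 0.01               # learning rate

for _ in range(20):                  # a few passes over the data
    for x in X:
        y = w @ x                    # linear unit: y = w . x
        w += eta * y * (x - y * w)   # Hebbian term plus Oja's decay term

# w converges to the (unit-norm) first principal component of the inputs.
print("learned direction:", w / np.linalg.norm(w))
```

The decay term -y²w is what keeps the weights bounded, distinguishing Oja's rule from plain Hebbian learning and linking it to PCA.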

Cited by 17 publications (12 citation statements)
References 26 publications
“…Indeed, we observe that the system is able to extract structural regularities inherent in the corpora, and then use this information to generalize to new constructions. In related work, the ability of recurrent networks to accommodate this form of grammatical generalization has been observed [44], [45], [62]-[64].…”
Section: Discussion
confidence: 96%
“…Each extracted phrase is processed by the construction model, and then the remainder of the original sequence is processed. Rather than a stack with potentially deep nesting, the current approach relies on a working memory to hold the main phrase while the embedded phrase (which is guaranteed to have no further embedded structure) is being processed, and both the stack (Miikkulainen, 1996; Voegtlin & Dominey, 2005) and our working memory have feasible neural implementations. We thus resolve this parsing problem without the use of a distinct stack system, and indeed can process the 49-sentence Miikkulainen corpus in English and a Japanese translation.…”
Section: Extraction of Relative Phrase Structure
confidence: 99%
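The control flow described in this excerpt, where the embedded phrase is processed first while the main phrase waits in a working-memory buffer rather than on a stack, can be illustrated with a short toy sketch. The bracket notation, function names, and single level of embedding are assumptions made purely for illustration; the cited model's actual phrase-extraction mechanism and construction model are not reproduced here.

```python
# Toy illustration of the working-memory control flow described in the
# excerpt: the embedded phrase is extracted and processed first while the
# main phrase waits in a working-memory buffer, then the remainder is
# processed. Brackets mark the embedded phrase purely for illustration;
# the cited model's actual extraction mechanism is not reproduced.

def process(phrase):
    # Stand-in for the construction model; here it just reports the phrase.
    print("processing:", " ".join(phrase))

def parse(tokens):
    if "[" in tokens:
        start, end = tokens.index("["), tokens.index("]")
        working_memory = tokens[:start] + tokens[end + 1:]  # main phrase waits
        process(tokens[start + 1:end])   # embedded phrase: no deeper nesting
        process(working_memory)          # then the held main phrase
    else:
        process(tokens)

parse("the dog [ that the cat chased ] ran away".split())
# processing: that the cat chased
# processing: the dog ran away
```

Because the embedded phrase is guaranteed to contain no further embedding, a single buffer suffices where general recursion would require a stack.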
“…With sufficiently large corpora the system displays a significant capability to generalize to new constructions, exploiting the regularities that define grammatical well-formedness (Hinaut and Dominey, 2013). This argues that a system can learn the grammatical structure implicit in a corpus without explicitly representing the grammar (Elman, 1991), and that it can generalize to accommodate new constructions that are consistent with that grammar (Voegtlin and Dominey, 2005). Part of the novelty in the position suggested here is that this recurrent network and readout system is implemented in the primate brain in the cortico-striatal system.…”
Section: Discussion
confidence: 99%
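The "recurrent network and readout system" referred to in this excerpt follows the reservoir-computing pattern: a fixed random recurrent network driven by the input, with learning confined to a linear readout. Below is a generic echo-state-style sketch of that pattern; the sizes, scaling, and toy task are illustrative assumptions, and it is not the cortico-striatal model the excerpt discusses.

```python
import numpy as np

# Generic sketch of a fixed recurrent network ("reservoir") with a trained
# linear readout. Sizes, scaling, and the toy task are illustrative
# assumptions, not the cited cortico-striatal model.
rng = np.random.default_rng(1)
n_in, n_res = 3, 100

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))   # fixed input weights
W = rng.normal(size=(n_res, n_res))                  # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # spectral radius < 1

def run(inputs):
    """Drive the reservoir with an input sequence; collect hidden states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce input channel 0 with a delay of two steps.
U = rng.normal(size=(500, n_in))
target = np.roll(U[:, 0], 2)
target[:2] = 0.0
S = run(U)

# Only the readout is trained (ridge regression); the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
print("train MSE:", np.mean((S @ W_out - target) ** 2))
```

Training only the readout while the recurrent weights stay fixed is what makes this class of models cheap to train while still exploiting rich recurrent dynamics.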