1993
DOI: 10.1080/09540099308915692
Sequence Recognition with Recurrent Neural Networks

Cited by 17 publications (11 citation statements)
References 11 publications
“…We will assume that the reader is familiar with the design of a Simple Recurrent Network (SRN, Elman, 1990). An RSRN (Figure 2) works very much like a standard SRN (Maskara & Noetzel, 1992, 1993; Cleeremans & Destrebecqz, 1997). Just as the RBP network involves adding "autoassociative" nodes to the output layer of a BP network, a reverberating SRN involves adding "autoassociative" nodes to the output layer of an SRN.…”
Section: Reverberating Simple Recurrent Network (RSRN)
Citation type: mentioning
confidence: 99%
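The excerpt above describes the RSRN as an SRN whose output layer is extended with autoassociative nodes, so the network both predicts the next element and reconstructs its current input. A minimal NumPy sketch of that forward pass is given below; the layer sizes, weight ranges, and function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the cited authors' code): a Simple Recurrent Network whose
# output layer is augmented with "autoassociative" nodes, as described for the RSRN.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid = 5, 10
n_out = n_in          # prediction of the next input symbol
n_auto = n_in         # autoassociative reconstruction of the current input

W_ih = rng.uniform(-0.5, 0.5, (n_hid, n_in))            # input   -> hidden
W_ch = rng.uniform(-0.5, 0.5, (n_hid, n_hid))           # context -> hidden
W_ho = rng.uniform(-0.5, 0.5, (n_out + n_auto, n_hid))  # hidden  -> output (+ autoassoc)

def rsrn_step(x, context):
    """One forward pass: returns (prediction, reconstruction, new context)."""
    hidden = sigmoid(W_ih @ x + W_ch @ context)
    out = sigmoid(W_ho @ hidden)
    prediction, reconstruction = out[:n_out], out[n_out:]
    return prediction, reconstruction, hidden   # hidden is copied back as the next context

# Run a short one-hot sequence through the (untrained) network.
context = np.zeros(n_hid)
for symbol in [0, 2, 4]:
    x = np.eye(n_in)[symbol]
    pred, recon, context = rsrn_step(x, context)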
“…We will use a connectionist architecture consisting of two coupled "auto-associative recurrent networks" (AARN; Maskara & Noetzel, 1992, 1993; Cleeremans & Destrebecqz, 1997) that pass information back and forth to each other by means of pseudopatterns. We will refer to auto-associative recurrent networks as Reverberating SRNs (RSRN), in order to emphasize the manner in which they use pseudopatterns to eliminate catastrophic interference in multiple sequence learning.…”
Citation type: mentioning
confidence: 99%
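The coupling described above, two networks exchanging pseudopatterns so that new learning does not erase previously stored knowledge, can be sketched roughly as below. The delta-rule stand-in networks, pattern counts, and function names are assumptions made for illustration; the cited work uses full recurrent networks rather than the single weight matrix shown here.

# Rough sketch (not the cited implementation) of pseudopattern exchange between
# two coupled networks: net A learns new material interleaved with pseudopatterns
# drawn from net B, so B's stored associations are not overwritten.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 5, 5

def make_net():
    return {"W": rng.uniform(-0.5, 0.5, (n_out, n_in))}

def run(net, x):
    return net["W"] @ x

def generate_pseudopatterns(net, n_patterns=20):
    """Feed random inputs through a net and record its responses."""
    xs = rng.uniform(0.0, 1.0, (n_patterns, n_in))
    ys = np.array([run(net, x) for x in xs])
    return xs, ys

def train(net, xs, ys, lr=0.05, epochs=50):
    """Plain delta-rule updates on (input, target) pairs."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = y - run(net, x)
            net["W"] += lr * np.outer(err, x)

net_a, net_b = make_net(), make_net()

# New material for net A, interleaved with pseudopatterns generated by net B.
new_xs = rng.uniform(0.0, 1.0, (10, n_in))
new_ys = rng.uniform(0.0, 1.0, (10, n_out))
pseudo_xs, pseudo_ys = generate_pseudopatterns(net_b)
train(net_a, np.vstack([new_xs, pseudo_xs]), np.vstack([new_ys, pseudo_ys]))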
“…In order to evaluate whether the failure of the model to account for human learning was particular to some aspect of the SRN's back-propagation training procedure, we explored the possibility of making a number of changes to the training procedure. In particular, we considered changes which might improve the speed of learning in the model, such as changing the steepness of the sigmoid squashing function (Izui & Pentland, 1990), using a variant of back-propagation called quick-prop (Fahlman, 1988), changing the range of random values used to initialize the weights (from 0.001 to 2.0), and related variants of the SRN architecture such as the AARN (Maskara & Noetzel, 1992, 1993), in which the model predicts not only the next response but also its current input and hidden unit activations. Under none of these circumstances were we able to find a pattern of learning which approximated human learning.…”
Section: Model-based Analyses
Citation type: mentioning
confidence: 99%
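As a rough illustration of the kinds of variations mentioned above (a steeper squashing function, a wider or narrower weight-initialization range, and an AARN-style target that adds the current input and hidden activations to the predicted response), the sketch below shows where each parameter would enter; the names and default values are assumptions, not the cited study's code.

# Illustrative sketch (assumed parameter names) of the training-procedure variants
# mentioned in the excerpt above.
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic squashing function; `gain` controls its steepness."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def init_weights(shape, init_range, rng=np.random.default_rng(2)):
    """Uniform initialization; the excerpt varied the range from 0.001 up to 2.0."""
    return rng.uniform(-init_range, init_range, shape)

def aarn_target(next_response, current_input, hidden_activations):
    """AARN-style target: next response plus autoassociative components."""
    return np.concatenate([next_response, current_input, hidden_activations])

h = sigmoid(np.array([0.3, -1.0]), gain=4.0)            # steeper squashing function
W = init_weights((4, 3), init_range=0.001)               # narrow initialization range
t = aarn_target(np.zeros(3), np.ones(3), np.zeros(4))    # target of length 3 + 3 + 4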
“…It is possible that the prediction task does not place sufficient demands on these networks to retain information from early time-points (since local information may be sufficient to predict subsequent segments). Alternatively, more fundamental limitations on the memory capacity of recurrent neural networks for learning long-distance dependencies may provide the limiting factor on the performance of these systems (see, for instance, Servan-Schreiber, Cleeremans & McClelland, 1991; Maskara & Noetzel, 1993; see Rohde & Plaut, this volume, for further discussion).…”
Section: Combining Multiple Cues for Segmentation and Identification
Citation type: mentioning
confidence: 99%
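A toy calculation can illustrate the first point, that local information may already carry most of the prediction task, leaving the network under little pressure to retain early context. The lexicon and the bigram baseline below are invented for illustration and do not come from the cited chapter.

# Toy illustration (not from the cited work): in a stream built from a small
# lexicon with non-overlapping letters, within-word transitions are fully
# determined by the previous letter, so a purely local bigram predictor scores
# well above chance without remembering anything from earlier in the stream.
import random
from collections import Counter, defaultdict

random.seed(0)
lexicon = ["bad", "kup", "tig"]                 # assumed toy vocabulary
stream = "".join(random.choice(lexicon) for _ in range(2000))

# Bigram predictor: most frequent successor of the current letter.
succ = defaultdict(Counter)
for cur, nxt in zip(stream, stream[1:]):
    succ[cur][nxt] += 1

hits = sum(succ[cur].most_common(1)[0][0] == nxt
           for cur, nxt in zip(stream, stream[1:]))
print(f"bigram next-symbol accuracy: {hits / (len(stream) - 1):.2f}")
# Only the word-boundary transitions are unpredictable from the previous letter,
# so accuracy is roughly (2 + 1/3) / 3, about 0.78, far above the 1/9 chance level.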