Neural Computation, 2013. DOI: 10.1162/neco_a_00409
Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks

Abstract: Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects…
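The analysis the abstract describes can be sketched numerically: search for states where the network's velocity vanishes, then linearize the dynamics there. Below is a minimal illustration of that idea; the tanh rate network, the random weight matrix, and the use of `scipy.optimize.minimize` are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 50
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))  # recurrent weights (assumed random here)

def velocity(x):
    # Continuous-time rate dynamics: dx/dt = -x + W tanh(x)
    return -x + W @ np.tanh(x)

def q(x):
    # Scalar "speed" objective: q(x) = 0.5 * |dx/dt|^2
    v = velocity(x)
    return 0.5 * v @ v

# Minimize q from a random initial state. Minima with q ~ 0 are fixed
# points; small-but-nonzero minima correspond to slow points.
res = minimize(q, rng.normal(0, 0.5, N), method="L-BFGS-B")
x_star = res.x

# Linearize around the candidate fixed point. Since tanh'(x) = sech^2(x),
# the Jacobian of dx/dt is J = -I + W * diag(sech^2(x*)).
J = -np.eye(N) + W * (1.0 / np.cosh(x_star) ** 2)
eigvals = np.linalg.eigvals(J)
n_unstable = int(np.sum(eigvals.real > 0))
print("final q:", res.fun, "unstable directions:", n_unstable)
```

The count of Jacobian eigenvalues with positive real part distinguishes stable fixed points (attractors) from saddles, which is the kind of local information the paper uses to explain network mechanism.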

Cited by 404 publications (450 citation statements); references 20 publications.
“…We then ‘reverse engineered’ the model 33 to discover its mechanism of selective integration. The global features of the model activity are easily explained by the overall arrangement of fixed points of the dynamics 33 (Fig.…”
Section: Results
confidence: 99%
“…The global features of the model activity are easily explained by the overall arrangement of fixed points of the dynamics 33 (Fig. 5), which result from the synaptic connectivity learned during training.…”
Section: Results
confidence: 99%
“…There are also others highlighting that ANN systems should not be seen as inexplicable models any more (I. I. Baskin, Palyulin, & Zefirov, 2009; Sussillo & Barak, 2013) since a number of methodologies facilitating the interpretation of model outcomes have been developed (I. Baskin, Ait, Halberstam, Palyulin, & Zefirov, 2002; Burden & Winkler, 1999; Guha, Stanton, & Jurs, 2005).…”
Section: Support Vector Machines (SVM)
confidence: 99%
“…These vectors are restricted to the potentially low-dimensional subspace spanned by the right eigenvectors of GN(2) with corresponding eigenvalues that have a real part greater than 1. Thus, although the network dynamics are chaotic, they are confined to a low-dimensional space, which has been suggested as a mechanism that could make computation in the network more robust [22].…”
Section: Introduction
confidence: 99%
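The confinement argument in the last excerpt can be illustrated with a toy computation: in a linearized rate network, only the eigen-directions of the effective coupling matrix whose eigenvalues have real part greater than 1 can overcome the leak term and sustain activity, so dynamics collapse onto the span of those right eigenvectors. The random coupling matrix and the gain value below are illustrative assumptions, not the cited model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
g = 1.1  # gain slightly above the instability threshold (assumed)
M = g * rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # effective coupling matrix

# Right eigenvectors of M; for dx/dt = -x + M x, a mode with Re(eigenvalue) > 1
# grows, while all others decay toward zero.
eigvals, right_vecs = np.linalg.eig(M)
unstable = eigvals.real > 1.0

# Dimension of the sustaining subspace: for g near 1 this is far smaller than N,
# which is the low-dimensional confinement the excerpt describes.
dim = int(np.sum(unstable))
print("unstable subspace dimension:", dim, "of", N)
```

For a random matrix with spectral radius just above 1, only the few eigenvalues near the right edge of the spectrum cross the threshold, so `dim` is typically a small fraction of `N`.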