2018
DOI: 10.7554/elife.37124
Learning recurrent dynamics in spiking networks

Abstract: Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We…
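The recursive least squares (RLS) procedure mentioned in the abstract can be illustrated with a minimal rate-reservoir sketch in the spirit of FORCE training. Everything below (network size, time constants, the sine target, the variable names) is an illustrative assumption, not the paper's code; the paper applies this style of update to the recurrent weights themselves, while this sketch trains only a linear readout.

```python
import numpy as np

# Minimal sketch of RLS (FORCE-style) training of a rate reservoir.
rng = np.random.default_rng(0)
N, T, dt, tau, alpha = 200, 2000, 1e-3, 0.01, 1.0

W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # random recurrent weights
w_out = np.zeros(N)                          # linear readout (trained)
P = np.eye(N) / alpha                        # running inverse rate-correlation estimate
x = rng.normal(0, 0.5, N)                    # network state

target = np.sin(2 * np.pi * np.arange(T) * dt)  # example target signal

for t in range(T):
    r = np.tanh(x)                   # firing rates
    x += dt * (-x + W @ r) / tau     # leaky rate dynamics
    z = w_out @ r                    # network output
    e = z - target[t]                # instantaneous error
    # RLS update: P tracks the inverse of the rate correlation matrix
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w_out -= e * k                   # error-driven readout update
```

In full FORCE-style recurrent training, the same error-times-gain update would be applied row-wise to W rather than to a separate readout.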


Cited by 46 publications (39 citation statements); references 56 publications.
“…Whereas in some approaches the credit assignment problem is tackled by relying on coding assumptions variably linked to optimality criteria, target-based approaches, both in the context of feed-forward [20] and recurrent models, provide a straightforward solution. As shown above, as well as in a recent work [27], it is not essential for the teacher network to be a rate model, as long as it effectively acts as a dynamic reservoir that expands task dimensionality via its recurrency, thereby providing rich targets.…”
Section: Discussion
Confidence: 92%
“…Balanced networks can also be trained to produce prescribed chaotic dynamics (like the Lorenz attractor in Fig 6A) or multiple complex quasi-periodic trajectories. In another task, inspired by the work of Laje and Buonomano [15] in rate networks, and similar to recent extensions to the QIF spiking case in [27], we trained a spiking network to reproduce desired transient dynamics in response to an external stimulus. To do so, we recorded innate current trajectories generated by a randomly initialized LIF balanced network for a short period of time (2 s) during its spontaneous activity.…”
Section: Results
Confidence: 99%
“…Instead, some of them relied on control theory to train a chaotic reservoir of spiking neurons [32–34]. Others used the FORCE algorithm [35,36] or variants of it [35,37–39]. However, the FORCE algorithm was not argued to be biologically realistic, as the plasticity rule for each synaptic weight requires knowledge of the current values of all other synaptic weights.…”
Section: Discussion
Confidence: 99%
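The nonlocality objection quoted above can be made concrete: in an RLS/FORCE-style update, the change to any single weight depends on the instantaneous error (which involves every weight) and on a gain vector mixing all presynaptic rates. A one-time-step sketch, with illustrative names and sizes rather than any cited implementation:

```python
import numpy as np

# Why the FORCE update is nonlocal (illustrative, single time step).
rng = np.random.default_rng(1)
N = 50
r = rng.normal(size=N)           # presynaptic rates at this time step
w = rng.normal(size=N)           # current weights onto one output unit
P = np.eye(N)                    # running inverse correlation matrix
f = 0.3                          # target output at this time step

e = w @ r - f                    # error involves every weight, not just w[i]
k = P @ r / (1.0 + r @ P @ r)    # gain vector mixes all presynaptic rates
dw = -e * k                      # so each dw[i] depends on all w[j] and r[j]
```

A biologically local rule would instead update w[i] from quantities available at synapse i alone (e.g. its own presynaptic rate and a shared error signal).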
“…The relationship between connectivity and firing rates in recurrent spiking networks can be mathematically difficult to derive, which complicates the development of gradient-based methods for training such networks (though some studies have succeeded; see, for example, [48,49]). The piecewise linearity of firing rates in the semi-balanced state (see Eq (4)) could simplify the training of recurrent spiking networks because the gradient of the firing rate with respect to the weights can be easily computed.…”
Section: PLOS Computational Biology
Confidence: 99%
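The simplification suggested in this last statement can be sketched as follows: if steady-state rates satisfy a rectified-linear fixed point r = [W r + x]₊, then on the set of active neurons the rates are linear in the input and the Jacobian has a closed form. The specific equation, sizes, and the simple fixed-point iteration below are illustrative assumptions, not the cited paper's Eq (4).

```python
import numpy as np

# Piecewise-linear rates make gradients easy (illustrative sketch).
rng = np.random.default_rng(2)
N = 40
W = rng.normal(0, 0.3 / np.sqrt(N), (N, N))  # weak weights so iteration contracts
x = rng.normal(0, 1.0, N)                    # external input

# Solve the fixed point r = max(0, W r + x) by simple iteration.
r = np.zeros(N)
for _ in range(500):
    r = np.maximum(0.0, W @ r + x)

active = r > 0                   # neurons above threshold
A = np.ix_(active, active)       # submatrix restricted to the active set
# On the active set, r_a = W_aa r_a + x_a, so dr_a/dx_a = (I - W_aa)^{-1};
# gradients with respect to W then follow by the chain rule.
J = np.linalg.inv(np.eye(active.sum()) - W[A])
```

Wherever the active set is fixed, this Jacobian is exact, which is the tractability the citing authors point to.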