2018
DOI: 10.1101/297424
Preprint

Learning recurrent dynamics in spiking networks

Abstract: The spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We…
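The core method the abstract refers to, recursive least squares (RLS) applied to recurrent weights in the style of FORCE learning, can be illustrated as follows. This is a minimal rate-based sketch under assumed parameter values, not the paper's spiking implementation; all names (`W`, `P`, `f`, network size, time constants) are illustrative.

```python
import numpy as np

# Minimal FORCE/RLS sketch (rate-based surrogate, not the paper's exact
# spiking implementation). Trains the incoming weights of one unit so its
# state tracks a toy target signal.

rng = np.random.default_rng(0)
N, T, dt = 200, 2000, 1e-3                         # assumed network size / duration
g, tau, alpha = 1.5, 10e-3, 1.0                    # assumed gain, time constant, RLS regularizer

W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
P = np.eye(N) / alpha                              # inverse correlation matrix
x = 0.5 * rng.standard_normal(N)                   # network state
f = np.sin(2 * np.pi * 5 * np.arange(T) * dt)      # toy target for unit 0

for t in range(T):
    r = np.tanh(x)                                 # firing rates
    x += dt / tau * (-x + W @ r)                   # leaky integration
    # RLS update: the change to every learned weight depends on P @ r,
    # i.e. on the activity (and history) of all presynaptic neurons --
    # the non-locality criticized in the citing papers below.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = x[0] - f[t]                                # error of the trained unit
    W[0] -= e * k                                  # update its incoming weights
```

After enough steps, `x[0]` tracks the sinusoidal target; training all rows of `W` against per-unit targets gives the full recurrent-learning setting the abstract describes.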

Cited by 14 publications (30 citation statements)
References 64 publications (177 reference statements)
“…Instead, some of them relied on control theory to train a chaotic reservoir of spiking neurons [32][33][34] . Others used the FORCE algorithm 35,36 or variants of it 35,[37][38][39] . However, the FORCE algorithm was not argued to be biologically realistic, as the plasticity rule for each synaptic weight requires knowledge of the current values of all other synaptic weights.…”
Section: Discussion
confidence: 99%
“…To circumvent this issue, some networks have held inhibitory efficacies fixed [64, 99–101]. Others have attempted to obey Dale’s principle by freezing synaptic connections whenever an update would reverse the sign of an excitatory or inhibitory influence [102]. In subsequent training, synapses attempting to change their signs were excluded and therefore prevented from exhibiting activity-dependent changes.…”
Section: Discussion
confidence: 99%
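The sign-freezing workaround described in the statement above might look like the following. This is a hypothetical sketch, not the exact rule from the cited work; the function name, masks, and toy usage are all assumptions.

```python
import numpy as np

def dale_constrained_update(W, dW, sign, frozen):
    """Apply update dW to W, freezing any synapse whose sign would reverse.

    W, dW : (N, N) weights and proposed updates (columns = presynaptic)
    sign  : (N,) +1 for excitatory, -1 for inhibitory presynaptic neurons
    frozen: (N, N) boolean mask of synapses excluded from learning
    """
    proposed = W + np.where(frozen, 0.0, dW)           # skip already-frozen synapses
    violates = np.sign(proposed) * sign[np.newaxis, :] < 0  # would flip sign?
    frozen = frozen | violates                         # freeze offenders permanently
    W = np.where(violates, W, proposed)                # reject the offending updates
    return W, frozen

# Toy usage: 3 excitatory and 1 inhibitory presynaptic neuron (illustrative).
rng = np.random.default_rng(1)
sign = np.array([1, 1, 1, -1])
W = np.abs(rng.standard_normal((4, 4))) * sign         # weights obey Dale's principle
frozen = np.zeros((4, 4), dtype=bool)
W, frozen = dale_constrained_update(W, 0.1 * rng.standard_normal((4, 4)), sign, frozen)
```

The design trade-off the statement points at: permanently excluding frozen synapses keeps every weight's sign consistent, but shrinks the pool of plastic synapses as training proceeds.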
“…One such method is based on the first-order reduced and controlled error (FORCE) algorithm previously developed for rate RNNs [6]. FORCE-based methods are capable of training spiking networks, but training all the parameters, including the recurrent connections, can become computationally inefficient [21][22][23]. Lastly, recent studies successfully converted rate-based networks trained with gradient descent to spiking networks, for both convolutional and recurrent neural networks [24,25].…”
Section: Introduction
confidence: 99%