2020
DOI: 10.1038/s41467-020-17236-y

A solution to the learning dilemma for recurrent networks of spiking neurons

Abstract: Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in p…
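
The two "pieces" referred to above are, in the full paper, a local eligibility trace maintained at each synapse and a top-down learning signal delivered to each neuron; the resulting e-prop rule combines them as Δw_ji ∝ −Σ_t L_j^t · e_ji^t. The sketch below illustrates that factorization for leaky integrate-and-fire (LIF) neurons. It is not the authors' reference implementation: the names (alpha, gamma, eta, B), the toy dimensions, and the restriction to input weights are illustrative assumptions.

```python
# Minimal sketch of the e-prop update for LIF neurons, following the
# paper's factorization of the gradient into an online learning signal
# L_j^t and a local eligibility trace e_ji^t. Toy sizes and constants
# below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 20, 50, 2, 100
alpha = 0.9    # membrane decay exp(-dt/tau_m)
v_th = 1.0     # spike threshold
gamma = 0.3    # pseudo-derivative dampening factor
eta = 1e-3     # learning rate

W_in = rng.normal(0, 0.5, (n_rec, n_in))
B = rng.normal(0, 0.5, (n_rec, n_out))    # fixed random feedback ("broadcast") weights
W_out = rng.normal(0, 0.5, (n_out, n_rec))

v = np.zeros(n_rec)            # membrane potentials
trace_in = np.zeros(n_in)      # low-pass filtered presynaptic spikes
dW_in = np.zeros_like(W_in)    # accumulated weight update

x = (rng.random((T, n_in)) < 0.05).astype(float)  # toy Poisson-like input spikes
y_target = np.zeros((T, n_out))                   # toy target readout

for t in range(T):
    v = alpha * v + W_in @ x[t]

    # pseudo-derivative of the spike w.r.t. the membrane potential
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

    z = (v >= v_th).astype(float)   # recurrent spikes
    v -= z * v_th                   # reset by subtraction

    # eligibility trace e_ji^t = psi_j^t * filtered presynaptic activity
    # (local quantities only, updated online)
    trace_in = alpha * trace_in + x[t]
    e = np.outer(psi, trace_in)

    # online learning signal L_j^t, broadcast from the readout error
    # (the paper uses leaky readout neurons; instantaneous here for brevity)
    y = W_out @ z
    L = B @ (y - y_target[t])

    # e-prop update, accumulated online: dW_ji -= eta * L_j^t * e_ji^t
    dW_in -= eta * L[:, None] * e

W_in += dW_in
```

Training the recurrent weights works the same way, with the low-pass filtered recurrent spikes taking the place of the filtered inputs in the eligibility trace.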


Cited by 364 publications (607 citation statements: 5 supporting, 602 mentioning, 0 contrasting)
References 43 publications
“…But, such learning is very slow in tasks that require large or deep networks because a global signal provides very limited information to neurons deep in the hierarchy [23][24][25]. Thus, an outstanding question is (Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…We hypothesize that evolutionary processes may have evolved circuitry for some very important tasks and that further plasticity may be used to fine-tune the circuitry. Alternatively, recently proposed biologically plausible variants to backpropagation may be interesting in this context [30]- [32]. It remains to be tested however whether these variants are powerful enough to train H-Mem networks.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recently however Bellec et al [3] have come up with the idea of e-prop(agation) that enables network learning through gradient descent. This exciting development recognizes two key factors hitherto ignored in RSNN research:…”
Section: The Weight Transport Problem and Credit Assignment (mentioning)
confidence: 99%
“…Very recently, there have been a handful of attempts to amalgamate DL and SNN in a variety of ways [2]-one of the most exciting being the creation of a specific hierarchical learning paradigm in Recurrent SNN (RSNNs) called e-prop [3]. However, this paper posits that this has been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if this is included in a new model then the union between DL and RSNN can be made in a more harmonious manner.…”
(mentioning)
confidence: 99%