2020
DOI: 10.1038/s42256-020-0162-9

An alternative to backpropagation through time

Cited by 13 publications (10 citation statements)
References 6 publications
“…Finally, activity-dependent compensation may provide useful techniques for machine learning. For example, we found that performance of a reservoir computing network could be improved if thresholds of individual neurons are initialized to achieve a particular activity probability given the distribution of input activities [66].…”
Section: Discussion (mentioning)
confidence: 99%
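The threshold-initialization idea in the statement above can be sketched in a few lines of NumPy. Everything here (network size, Gaussian inputs, the per-neuron quantile trick) is an illustrative assumption, not the actual procedure of reference [66]:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 20, 100
target_p = 0.1  # desired activity probability per neuron

# Random input weights and a sample of inputs drawn from the
# assumed input distribution
W_in = rng.normal(size=(n_neurons, n_inputs))
X = rng.normal(size=(1000, n_inputs))

# Input drive each neuron receives over the sample
drive = X @ W_in.T  # shape (1000, n_neurons)

# Initialize each threshold at the (1 - target_p) quantile of that
# neuron's own drive, so it is active with probability ~target_p
# on inputs from this distribution
thresholds = np.quantile(drive, 1.0 - target_p, axis=0)

# Empirical activity probability per neuron after initialization
activity = (drive > thresholds).mean(axis=0)
```

With thresholds set this way, each neuron's empirical activity probability is close to the target regardless of how large its random input drive happens to be.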
“…As a side note, it is very difficult to imagine how a biological neural network would be able to implement backpropagation through time, and alternative approaches have recently made their appearance [121]. Reservoir computing methods came up with a workaround to the problem of training recurrent networks: they do not train them but instead harness their properties. Common to the approaches of echo state networks and liquid state machines is the idea of using a random recurrent network with fixed connectivity, hence there is no need to resort to backpropagation through time.…”
Section: Future of Neuromorphic and Bio-inspired Computing Systems (mentioning)
confidence: 99%
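The fixed-random-reservoir workaround described in that statement can be illustrated with a minimal echo state network: the recurrent weights stay fixed, and only a linear readout is trained. The sizes, the spectral-radius rescaling to 0.9, and the ridge-regression readout below are conventional textbook choices, not details taken from the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1

# Fixed random recurrent weights, rescaled so the spectral radius is < 1
# (a standard echo-state-property heuristic); never trained
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_in))

# Drive the reservoir with a signal and collect its states
T = 500
u = np.sin(np.arange(T) * 0.1)[:, None]
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train ONLY the linear readout (ridge regression) to predict the
# next input sample; no backpropagation through time is needed
washout = 100
S, y = states[washout:-1], u[washout + 1:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
pred = S @ W_out
```

Because the recurrent part is never updated, training reduces to a single linear regression on the collected reservoir states.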
“…This approach requires the full history of the network activity to be stored, hence it is quite unlikely that the brain performs such an algorithm for learning. An alternative to the BPTT algorithm that is more biologically realistic is the e-prop learning rule (Bellec et al., 2020); for a short summary see (Manneschi and Vasilaki, 2020). Unlike BPTT, for updating the synaptic weights of the network this learning rule requires only information locally available to each synapse and neuron: an eligibility trace and a learning signal broadcast across the network.…”
Section: Training RSNNs with e-prop (mentioning)
confidence: 99%
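The statement above captures the key structural idea of e-prop: the weight update factors into a locally computable eligibility trace and a broadcast learning signal, with no stored activity history. The toy leaky-unit network below is a deliberately simplified sketch of that factorization only; the leak-based trace, the error-based learning signal, and the learning rate are assumptions for illustration, and the actual rule in Bellec et al. (2020) is derived for recurrent spiking networks:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_rec, T = 10, 5, 50
alpha = 0.9  # leak factor of the recurrent units

W_in = rng.normal(scale=0.1, size=(n_rec, n_in))
x_seq = rng.normal(size=(T, n_in))
target = rng.normal(size=(T, n_rec))

v = np.zeros(n_rec)
elig = np.zeros((n_rec, n_in))  # one eligibility trace per synapse
dW = np.zeros_like(W_in)

for t in range(T):
    v = alpha * v + W_in @ x_seq[t]  # leaky state update
    # Eligibility trace: low-pass filtered presynaptic activity,
    # computed forward in time from local quantities only
    # (broadcasts x_seq[t] across the n_rec rows)
    elig = alpha * elig + x_seq[t]
    # Learning signal: here simply the instantaneous output error,
    # broadcast to all synapses of each neuron
    L = v - target[t]
    # e-prop-style update: learning signal times eligibility trace,
    # accumulated online instead of unrolling the past as BPTT would
    dW -= 0.01 * L[:, None] * elig

W_in += dW
```

The point of the sketch is the data flow: at every step the update uses only the current error and a trace each synapse maintains itself, which is what makes the rule local in time and space.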