2021
DOI: 10.1038/s41598-021-91786-z

Event-based backpropagation can compute exact gradients for spiking neural networks

Abstract: Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss …


Cited by 65 publications (35 citation statements)
References 52 publications
“…In contrast to the time-to-first spike approach (Göltz et al, 2021), this does not require explicit or analytical knowledge of the function t⋆(x, p) and is also applicable to more complex neuron models. In the context of spiking neural networks, this was recognized by Wunderlich and Pehle (2020) and elaborated in full generality by Pehle (2021). Concurrent work also introduced this technique to the wider machine learning community (Chen et al, 2021).…”
Section: A Principled Approach to Gradient-Based Parameter Optimization (mentioning)
confidence: 93%
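The quoted passage concerns differentiating an implicitly defined spike time. As a hedged illustration (the notation below is assumed for this sketch and is not taken from the cited papers): if a spike is emitted when the membrane potential V(t, p) first reaches a threshold ϑ, the crossing condition V(t⋆(p), p) = ϑ can be differentiated implicitly, giving the spike-time derivative without any closed-form expression for t⋆:

\[
\frac{\partial t^{\star}}{\partial p} \;=\; -\left.\frac{\partial V/\partial p}{\partial V/\partial t}\right|_{t=t^{\star}}, \qquad \text{provided } \left.\frac{\partial V}{\partial t}\right|_{t^{\star}} \neq 0 .
\]

The denominator is the slope of the membrane potential at the crossing, so such derivations require the threshold crossing to be transversal (nonzero slope); this is the sense in which no explicit formula for t⋆(x, p) is needed.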
“…A similar argument to the one made above for the parameter derivative of the transition times allows one to then relate the adjoint state variables after the transition, λ⁺, to the adjoint state variables before the transition, λ⁻, and yields an event-based rule for gradient accumulation of the parameters that only enter the transition equations (in particular the synaptic weights). This is elaborated more explicitly in Wunderlich and Pehle (2020) and Pehle (2021). The event-based nature of the gradient accumulation and the sparse propagation of error information has immediate consequences for neuromorphic hardware.…”
Section: A Principled Approach to Gradient-Based Parameter Optimization (mentioning)
confidence: 98%
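A schematic reading of this passage (sign conventions and the precise adjoint jump condition are omitted; this is not the exact expression from Wunderlich and Pehle (2020) or Pehle (2021)): between spikes the adjoint state λ is integrated backwards along the membrane dynamics, and at each event time t_k the state jumps according to a map x⁺ = h(x⁻, w). When a parameter w enters only this jump map, as a synaptic weight does when a presynaptic spike increments the postsynaptic current, the continuous part of the gradient vanishes and what remains is a sum over events,

\[
\frac{dL}{dw} \;\sim\; \sum_{k} \lambda(t_k)^{\top} \left.\frac{\partial h}{\partial w}\right|_{t_k},
\]

so error information only needs to be read out at spike times. This sparsity is what makes the scheme attractive for neuromorphic hardware, as the quote notes.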
“…Numerous techniques for directly training SNNs have been developed. Notably, back-propagation can be applied to exact spike times [14][15][16], or the non-differentiable transfer function of each spiking neuron can be replaced with a 'surrogate gradient' function, allowing back-propagation through time [17,18] or more biologically plausible learning rules [19][20][21] to be applied to spiking models. However, none of these techniques are yet capable of training deep feedforward SNNs.…”
Section: Introduction (mentioning)
confidence: 99%
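The surrogate-gradient idea mentioned here can be summarized in a few lines of code. The sketch below is illustrative only and is not taken from any of the cited works; the fast-sigmoid surrogate and the slope beta = 10.0 are arbitrary choices. It keeps the Heaviside step in the forward pass but substitutes a smooth derivative in the backward pass, so back-propagation through time can proceed through the spike nonlinearity:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth 'surrogate' derivative backward."""

    @staticmethod
    def forward(ctx, v_minus_threshold):
        ctx.save_for_backward(v_minus_threshold)
        # Non-differentiable step: 1 if the membrane potential exceeds threshold.
        return (v_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0  # surrogate sharpness (illustrative value)
        # Fast-sigmoid surrogate: d/dv H(v) is replaced by 1 / (beta*|v| + 1)^2.
        return grad_output / (beta * v.abs() + 1.0) ** 2

# Usage inside a simulation step:
#   spikes = SurrogateSpike.apply(v_mem - v_threshold)
```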
“…In the third approach, known as latency learning, the idea is to define the neuron activity as a function of its firing time [19,20,21,22,23,24,25]. In other words, neurons fire at most once and stronger outputs correspond to shorter spike delays.…”
Section: Introduction (mentioning)
confidence: 99%
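As a small worked example of the latency-coding idea (a hypothetical textbook case, not drawn from the cited references): for a non-leaky integrate-and-fire neuron with unit capacitance, V(0) = 0, constant input current I > 0 and threshold ϑ, the membrane potential grows as V(t) = I·t, so the single spike occurs at

\[
t^{\star} = \frac{\vartheta}{I},
\]

i.e. a stronger input yields an earlier spike. Latency-learning methods exploit this monotone relationship by treating the firing time t⋆, rather than a firing rate, as the neuron's output and differentiating the loss with respect to it.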