2018
DOI: 10.1162/neco_a_01086

SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks

Abstract: A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. […]
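To make the surrogate gradient idea concrete: the spike nonlinearity stays a hard Heaviside step in the forward pass, while learning substitutes a smooth surrogate for its undefined derivative. Below is a minimal NumPy sketch assuming the fast-sigmoid surrogate used in SuperSpike; the function names and threshold convention are illustrative, not the authors' code.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard, non-differentiable Heaviside spike function."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=1.0):
    """Stand-in derivative used during learning: the derivative of the
    fast sigmoid f(x) = x / (1 + |x|), evaluated at the distance of the
    membrane potential from threshold."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2
```

During training, gradients flow through surrogate_grad even though the network itself only ever emits the hard spikes produced by spike.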

Cited by 469 publications (426 citation statements: 8 supporting, 418 mentioning, 0 contrasting)
References 67 publications
“…The learning rule called SuperSpike (Zenke and Ganguli, 2018) was derived by applying RTRL to spiking neural networks without recurrent connections. In the absence of these connections, RTRL is practicable, and the resulting learning rule uses eligibility traces similar to those arising in e-prop with LIF neurons.…”
Section: Comparison of E-prop with Other Online Learning Methods (mentioning)
confidence: 99%
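To make the eligibility-trace structure of such a rule concrete, here is a hedged NumPy sketch of a feedforward LIF layer in which each synapse carries an online trace, roughly the low-pass-filtered product of the postsynaptic surrogate derivative and the filtered presynaptic spike train, which is the shape RTRL takes when recurrent connections are absent. All names and time constants are illustrative assumptions.

```python
import numpy as np

def eligibility_traces(pre_spikes, w, dt=1e-3, tau_mem=10e-3,
                       tau_syn=5e-3, threshold=1.0):
    """Online per-synapse eligibility traces for a feedforward LIF layer.

    pre_spikes: (T, n_pre) binary spike array; w: (n_post, n_pre) weights.
    Returns traces of shape (T, n_post, n_pre)."""
    T, n_pre = pre_spikes.shape
    n_post = w.shape[0]
    alpha = np.exp(-dt / tau_mem)    # membrane / trace decay
    beta = np.exp(-dt / tau_syn)     # synaptic decay
    i_syn = np.zeros(n_pre)          # filtered presynaptic train (eps * s_j)
    u = np.zeros(n_post)             # membrane potentials
    elig = np.zeros((n_post, n_pre))
    out = np.zeros((T, n_post, n_pre))
    for t in range(T):
        i_syn = beta * i_syn + pre_spikes[t]
        u = alpha * u + w @ i_syn
        # surrogate derivative of the postsynaptic nonlinearity at u
        sigma_prime = 1.0 / (1.0 + np.abs(u - threshold)) ** 2
        spikes = (u >= threshold).astype(float)
        u -= spikes * threshold      # reset by subtraction
        # coincidence of post factor and pre trace, low-pass filtered
        elig = alpha * elig + np.outer(sigma_prime, i_syn)
        out[t] = elig
    return out
```

A separate, possibly delayed error signal can then multiply these traces to produce the actual weight updates, which is what makes the rule usable online.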
“…This approach requires non-local communication within the RNN, which we wanted to avoid in e-prop. In contrast to e-prop, none of the papers above (Zenke and Ganguli, 2018; Murray, 2019; Roth et al., 2019) derived a theory or a definition of eligibility traces that can be applied to neuron models with non-trivial internal dynamics, such as adaptive neurons or LSTM units, which appear to be essential for solving tasks with demanding temporal credit assignment of errors.…”
Section: Comparison of E-prop with Other Online Learning Methods (mentioning)
confidence: 99%
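For contrast, the following sketch shows what an eligibility trace looks like for a neuron with non-trivial internal dynamics, an adaptive LIF (ALIF) neuron, following the e-prop equations of Bellec et al. (2020) as commonly stated; treat the exact update forms and the constants here as assumptions rather than a reference implementation.

```python
import numpy as np

def alif_eligibility(x, w, dt=1e-3, tau_mem=20e-3, tau_adapt=200e-3,
                     b0=1.0, beta_a=0.07, gamma=0.3):
    """Eligibility traces for one ALIF neuron with input spikes x: (T, n_in)
    and input weights w: (n_in,). Returns traces of shape (T, n_in)."""
    T, n_in = x.shape
    alpha = np.exp(-dt / tau_mem)    # membrane decay
    rho = np.exp(-dt / tau_adapt)    # adaptation decay
    v, a = 0.0, 0.0                  # membrane potential, adaptation variable
    eps_v = np.zeros(n_in)           # eligibility vector, membrane component
    eps_a = np.zeros(n_in)           # eligibility vector, adaptation component
    out = np.zeros((T, n_in))
    for t in range(T):
        v = alpha * v + float(w @ x[t])
        A = b0 + beta_a * a          # adaptive threshold
        # triangular pseudo-derivative around the (moving) threshold
        psi = (gamma / b0) * max(0.0, 1.0 - abs((v - A) / b0))
        z = float(v >= A)
        v -= z * b0                  # reset by subtraction
        a = rho * a + z              # spike-triggered threshold adaptation
        # two-component eligibility vector tracks d(state)/d(weight)
        eps_a = psi * eps_v + (rho - psi * beta_a) * eps_a
        eps_v = alpha * eps_v + x[t]
        out[t] = psi * (eps_v - beta_a * eps_a)
    return out
```

The point of the quoted comparison is that the trace now needs a second component for the adaptation variable, which a membrane-only derivation would miss.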
“…The non-differentiable nature of spiking dynamics makes it hard to design energy-based learning models involving neural variables. Neuromorphic algorithms currently work around this problem in different ways, including mapping deep neural nets to spiking networks through rate-based techniques [43,44], formulating loss functions that penalize the difference between actual and desired spike times [45,46], or approximating the derivatives of spike signals through various means [47,48,49]. However, formulating the spiking dynamics of the entire network with an energy function of the neural state variables would enable us to use the energy function itself for learning the weight parameters; this forms the basis for our future work.…”
Section: Results (mentioning)
confidence: 99%
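One concrete instance of a loss that penalizes the difference between actual and desired spike times [45,46] is a van Rossum-style distance: convolve both spike trains with the same exponential kernel and integrate the squared difference, which turns a comparison of discrete spike times into a smooth quantity. A minimal sketch; the kernel choice and constants are illustrative.

```python
import numpy as np

def van_rossum_loss(spikes, targets, dt=1e-3, tau=10e-3):
    """Squared distance between exponentially filtered actual and desired
    spike trains. spikes, targets: (T,) binary arrays."""
    decay = np.exp(-dt / tau)
    f_actual, f_target, loss = 0.0, 0.0, 0.0
    for s, y in zip(spikes, targets):
        f_actual = decay * f_actual + s   # filtered actual train
        f_target = decay * f_target + y   # filtered desired train
        loss += 0.5 * (f_target - f_actual) ** 2 * dt
    return loss
```

Because the filtered traces decay gradually, a spike that is merely late contributes less loss than a spike that is missing altogether.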
“…This rule is the basis of the SuperSpike algorithm proposed by Zenke and Ganguli (2017) and can learn to recognize and generate arbitrary patterns of spikes. It consists of three factors: a modulatory error component (∂L/∂s), a presynaptic component (ϵ ∗ s_j), and a postsynaptic component ρ(u).…”
Section: Distilling Machine Learning and Neuroscience for Neuromorphic… (mentioning)
confidence: 99%
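As a closing illustration, the three factors named in this statement can be combined into a single weight update per time step. This is a sketch of the general three-factor shape, assuming a fast-sigmoid surrogate for ρ(u); the function name and learning-rate handling are illustrative, not the authors' reference code.

```python
import numpy as np

def three_factor_update(error, pre_trace, u, threshold=1.0, lr=1e-3):
    """One step of a SuperSpike-style three-factor weight update.

    error:     (n_post,) modulatory error signal, e.g. filtered
               target-minus-actual spike trains (the third factor).
    pre_trace: (n_pre,) filtered presynaptic spikes (eps * s_j).
    u:         (n_post,) membrane potentials.
    Returns a (n_post, n_pre) weight change."""
    rho_u = 1.0 / (1.0 + np.abs(u - threshold)) ** 2  # postsynaptic factor
    # modulatory x postsynaptic factors, outer product with the pre factor
    return lr * np.outer(error * rho_u, pre_trace)
```

In a full simulation this product would itself be low-pass filtered and accumulated before the weights change, but the factorization above is the essential structure.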