2018
DOI: 10.3389/fnins.2018.00331

Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks

Abstract: Spiking neural networks (SNNs) are promising in ascertaining brain-like behaviors since spikes are capable of encoding spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make the high-performance supervised training of SNNs possible. However, these methods primarily focus on spatial domain information, while the dynamics in the temporal domain receive less attention. Consequently, this mi…
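To make the "spatio-temporal" dynamics concrete, the following is a minimal Python sketch (not the paper's exact formulation; the decay constant, threshold, and reset-by-gating form are illustrative assumptions) of a leaky integrate-and-fire layer unrolled over discrete time steps, the kind of model that direct BP training differentiates through in both the layer and time dimensions:

# Minimal sketch: an iterative leaky integrate-and-fire (LIF) layer unrolled
# over time. The constants (decay, v_th) and the reset-by-gating form are
# illustrative assumptions, not the paper's exact model.
import numpy as np

def lif_layer(x_seq, W, decay=0.8, v_th=1.0):
    """x_seq: (T, n_in) binary input spikes; W: (n_out, n_in) weights.
    Returns (T, n_out) output spikes."""
    T, _ = x_seq.shape
    n_out = W.shape[0]
    u = np.zeros(n_out)           # membrane potential
    s = np.zeros(n_out)           # spikes from the previous step
    out = np.zeros((T, n_out))
    for t in range(T):
        # leak, reset after a spike (gating form), then integrate input current
        u = decay * u * (1.0 - s) + W @ x_seq[t]
        s = (u >= v_th).astype(float)   # fire when the threshold is crossed
        out[t] = s
    return out

# toy usage: 10 time steps, 5 inputs, 3 output neurons
rng = np.random.default_rng(0)
spikes = lif_layer((rng.random((10, 5)) < 0.3).astype(float),
                   rng.normal(0, 0.5, size=(3, 5)))
print(spikes.shape)  # (10, 3)

Backpropagating through both the layer stack and the time loop above is what gives such training its spatial and temporal components.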

Cited by 767 publications (606 citation statements)
References 47 publications
“…Hence, the development of an efficient training algorithm for SNNs is of considerable importance. Much effort has been expended in the past two decades on this issue [10], with the subsequently developed approaches generally characterized as indirect supervised learning (SL), direct SL, or plasticity-based training [10,11]. For the indirect SL method, ANNs are first trained and then mapped to equivalent SNNs by different conversion algorithms that transform real-valued computing into spike-based computing [12,13,14,15,16,17,18,19]; however, this method does not incorporate SNN learning and therefore provides no heuristic information on how to train a SNN.…”
Section: Introduction (mentioning)
confidence: 99%
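As a rough illustration of the indirect (conversion) route described in the excerpt above, the following Python sketch reuses the weights of a trained ReLU network in integrate-and-fire neurons and lets firing rates stand in for the ANN activations; the thresholds, time horizon, and soft-reset rule are illustrative assumptions rather than any specific published conversion algorithm:

# Hedged sketch of ANN-to-SNN conversion: reuse trained ReLU weights in
# integrate-and-fire (IF) neurons and let firing rates approximate activations.
# Threshold, time horizon, and soft reset are illustrative assumptions.
import numpy as np

def convert_and_run(W_ann, x, T=200, v_th=1.0):
    """W_ann: list of weight matrices from a trained ReLU ANN.
    x: real-valued input in [0, 1], presented as Bernoulli spike trains.
    Returns per-neuron firing rates of the output layer."""
    rng = np.random.default_rng(0)
    counts = np.zeros(W_ann[-1].shape[0])
    v = [np.zeros(W.shape[0]) for W in W_ann]   # membrane potential per layer
    for _ in range(T):
        s = (rng.random(x.shape) < x).astype(float)   # rate-coded input
        for i, W in enumerate(W_ann):
            v[i] += W @ s                   # integrate (no leak in IF neurons)
            s = (v[i] >= v_th).astype(float)
            v[i] -= s * v_th                # soft reset: subtract the threshold
        counts += s
    return counts / T                        # output firing rates

# toy usage with random "pretrained" weights
rng = np.random.default_rng(1)
W = [rng.normal(0, 0.3, (8, 4)), rng.normal(0, 0.3, (2, 8))]
print(convert_and_run(W, np.array([0.9, 0.1, 0.5, 0.7])))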
“…For the indirect SL method, ANNs are first trained and then mapped to equivalent SNNs by different conversion algorithms that transform real-valued computing into spike-based computing [12,13,14,15,16,17,18,19]; however, this method does not incorporate SNN learning and therefore provides no heuristic information on how to train a SNN. The direct SL method is based on the BP algorithm [11,20,21,22,23], e.g., using membrane potentials as continuous variables for calculating errors in BP [20,23] or using continuous activity function to approximate neuronal spike activity and obtain differentiable activity for the BP algorithm [11,22]. However, such research must still perform numerous real-valued computations and non-local communications during the training process; thus, BP-based methods are as potentially energy inefficient as ANNs and also lack bio-plausibility.…”
Section: Introduction (mentioning)
confidence: 99%
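The "continuous activity function" mentioned in the excerpt is commonly realized with a surrogate derivative. Below is a generic PyTorch sketch (the class name SpikeFn, the rectangular surrogate window, and the threshold value are illustrative assumptions, not the cited works' exact definitions): the forward pass emits hard spikes, while the backward pass substitutes a smooth gradient so BP can proceed.

# Generic surrogate-gradient sketch: hard spikes forward, smooth derivative
# backward. The rectangular window (width gamma) is an illustrative choice.
import torch

class SpikeFn(torch.autograd.Function):
    gamma = 0.5  # half-width of the surrogate derivative window

    @staticmethod
    def forward(ctx, u, v_th=1.0):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        return (u >= v_th).float()          # non-differentiable Heaviside

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # rectangular surrogate: gradient passes only near the threshold
        surrogate = (torch.abs(u - ctx.v_th) < SpikeFn.gamma).float() / (2 * SpikeFn.gamma)
        return grad_out * surrogate, None

# toy usage: gradients now reach the membrane potential
u = torch.randn(4, requires_grad=True)
SpikeFn.apply(u).sum().backward()
print(u.grad)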
“…Most of the classification performances available in literature for SNNs are for MNIST and CIFAR-10 datasets. The popular methods for SNN training are 'Spike Time Dependent Plasticity (STDP)' based unsupervised learning [7,49,3,42,43] and 'spike-based backpropagation' based supervised learning [24,16,48,30,29]. There are a few works [45,17,46,22] which tried to combine the two approaches to get the best of both worlds.…”
Section: The Classification Performance (mentioning)
confidence: 99%
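For readers unfamiliar with the STDP rule mentioned above, here is a small pair-based STDP sketch in Python (the trace time constant and the learning rates a_plus/a_minus are illustrative assumptions, not values from the cited works): weights grow when presynaptic spikes precede postsynaptic ones and shrink in the opposite case.

# Illustrative pair-based STDP: exponentially decaying spike traces drive
# potentiation (pre before post) and depression (post before pre).
import numpy as np

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012, tau=20.0):
    """w: (n_post, n_pre) weights; spike arrays: (T, n) binary spike trains."""
    w = w.copy()                     # do not modify the caller's weights
    pre_trace = np.zeros(pre_spikes.shape[1])
    post_trace = np.zeros(post_spikes.shape[1])
    decay = np.exp(-1.0 / tau)
    for t in range(pre_spikes.shape[0]):
        pre_trace = pre_trace * decay + pre_spikes[t]
        post_trace = post_trace * decay + post_spikes[t]
        # potentiation: postsynaptic spike while the presynaptic trace is high
        w += a_plus * np.outer(post_spikes[t], pre_trace)
        # depression: presynaptic spike while the postsynaptic trace is high
        w -= a_minus * np.outer(post_trace, pre_spikes[t])
    return np.clip(w, 0.0, 1.0)

# toy usage
rng = np.random.default_rng(0)
w = stdp_update(rng.random((3, 5)) * 0.5,
                (rng.random((100, 5)) < 0.1).astype(float),
                (rng.random((100, 3)) < 0.1).astype(float))
print(w)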
“…Network Architecture | Method | Test Accuracy (%)
O'Connor (2016) [12] | MLP | Fractional stochastic gradient descent | 97.93
Lee (2017) [8] | MLP | Backpropagation | 98.88
Neftci (2017) [26] | MLP | Event-driven random backpropagation | 97.98
Mostafa (2017) [11] | MLP | Backpropagation with temporal coding | 98.00
Wu (2018) [27] | MLP | Spatio-Temporal Backpropagation | 98.48
Diehl (2015) [17] | MLP | Conversion of ANNs | 98.60
Neil (2016) [
In contrast to the indirect ANN to SNN conversion approach, the proposed learning rule can integrate the inference latency, spike rate and hardware constraints more effectively during the training. Hence, it allows direct deployment to neuromorphic hardware for efficient inference.…”
Section: Model (mentioning)
confidence: 99%
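The claim about integrating spike rate and hardware constraints during training can be pictured, very roughly, as adding a penalty term to the training objective. The sketch below is a generic illustration only (the function name, target rate, and penalty form are assumptions, not the cited paper's actual learning rule):

# Generic illustration: fold a spike-rate constraint into the loss so the
# trained SNN stays sparse enough for the target hardware (assumed form).
import torch

def constrained_loss(task_loss, out_spikes, target_rate=0.05, lam=1.0):
    """out_spikes: (T, batch, n) binary spike tensor from the network."""
    mean_rate = out_spikes.mean()
    rate_penalty = (mean_rate - target_rate).clamp(min=0.0) ** 2
    return task_loss + lam * rate_penalty

# toy usage
spikes = (torch.rand(50, 8, 10) < 0.2).float()
print(constrained_loss(torch.tensor(0.7), spikes))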