2020
DOI: 10.3389/fnins.2020.00653
Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization

Abstract: Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications. In recent years, there have been several proposals focused on supervised (conversion, spike-based gradient descent) and unsupervised (spike timing dependent plasticity) training methods to improve the accuracy of SNNs on large-scale tasks. However, each of these methods suffers from scalability, latency, and accuracy limitations. In this paper, we propose novel algorithmic techniques of modifyin…
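Among the training routes the abstract lists, spike-based gradient descent lends itself to a short illustration. The sketch below is not the paper's implementation; it shows the generic pattern of a leaky integrate-and-fire (LIF) neuron trained through a surrogate gradient, with all names, thresholds, and constants chosen as assumptions for demonstration.

```python
# Minimal sketch of spike-based gradient descent (illustrative only, not
# the paper's code): a Heaviside spike in the forward pass paired with a
# smooth surrogate derivative in the backward pass, applied to a leaky
# integrate-and-fire (LIF) neuron. Threshold and leak values are assumed.
import torch


class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # binary spike output

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: smooth stand-in for the Heaviside derivative.
        surrogate = 1.0 / (1.0 + 10.0 * membrane_potential.abs()) ** 2
        return grad_output * surrogate


def lif_step(x, mem, threshold=1.0, leak=0.9):
    """One LIF update: leak, integrate input, fire, then soft-reset."""
    mem = leak * mem + x
    spike = SurrogateSpike.apply(mem - threshold)
    mem = mem - spike * threshold
    return spike, mem
```

Because the spike nonlinearity is non-differentiable, the surrogate in the backward pass is what makes end-to-end gradient descent through spike trains possible at all.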

Cited by 93 publications (65 citation statements) | References 48 publications
“…Taking inspiration from the extremely efficient brain, spiking neural networks (SNNs) [22] combine binary-valued activations (spikes) with asynchronous and sparse communication. Such SNNs are arguably also more hardware-friendly [10] and energy-efficient [28]. However, compared to ANNs, the development of SNNs is in its early phase.…”
Section: Introduction (mentioning)
confidence: 99%
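To make the binary-valued activations and sparse communication the excerpt describes concrete, here is a small rate-coding sketch (an illustration under assumed sizes and scaling, not code from the cited work): real-valued inputs become 0/1 spike trains, and only the few active events need to be routed.

```python
# Illustrative rate-coding sketch (assumed sizes and scaling): inputs in
# [0, 1] become binary spike trains; event-driven hardware only has to
# communicate and process the sparse set of nonzero events.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random(1000)                       # normalized intensities
T = 25                                          # simulation timesteps

# Spike with probability proportional to intensity at each timestep.
spikes = rng.random((T, inputs.size)) < (inputs * 0.2)

print(f"fraction of silent events: {1.0 - spikes.mean():.2%}")
events = np.argwhere(spikes)                    # (timestep, neuron) pairs
```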
“…Moreover, it is well known that the event-driven nature of SNNs helps reduce network activation, which in turn reduces the energy dissipation of computation. Using the method presented in Panda et al. (2020), we compute the energy advantage of H-SNN over DNN baselines during inference as follows: 1.0× for H-SNN, 1.55× for 3D CNN-α, and 3.37× for 3D ShuffleNetV2; 3D MobileNetV2, 3D CNN-β, and CNN+LSTM consume 9.01×, 11.80×, and 28.88× higher energy than H-SNN, respectively. BP-SNN and BP-SNN-LS use similar energy to H-SNN.…”
Section: Results (mentioning)
confidence: 99%
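The excerpt applies an operation-count energy model attributed to Panda et al. (2020). A common form of such models, sketched below with per-operation energies that are assumptions rather than values verified from the paper, charges an ANN one multiply-accumulate (MAC) per synapse and an SNN one accumulate (AC) per actual spike.

```python
# Hedged sketch of an operation-count energy comparison between ANNs and
# SNNs. The per-op energies are assumed constants (of the kind often
# quoted for 45 nm CMOS), not values verified from Panda et al. (2020).
E_MAC = 4.6e-12  # J per multiply-accumulate (dense ANN op), assumed
E_AC = 0.9e-12   # J per accumulate (binary-spike SNN op), assumed


def ann_energy(num_macs: float) -> float:
    """Dense ANN inference: every MAC executes on every forward pass."""
    return num_macs * E_MAC


def snn_energy(num_acs: float, spike_rate: float, timesteps: int) -> float:
    """Event-driven SNN inference: only active spikes cost an accumulate."""
    return num_acs * spike_rate * timesteps * E_AC


# With made-up layer statistics, the advantage grows as total spike
# activity (spike_rate * timesteps) falls.
ratio = ann_energy(1e9) / snn_energy(1e9, spike_rate=0.05, timesteps=20)
print(f"ANN/SNN energy ratio: {ratio:.2f}x")
```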
“…With the increasing preponderance of event-based algorithms, data-driven approaches fall mainly into two classes: asynchronous spiking neural networks [46,49,52,54] and standard learning architectures [7,40,65,68]. As bio-inspired methods, SNNs offer asynchronous inference at a fraction of the power consumption but suffer from the vanishing spike phenomenon [76] in deeper layers. In contrast, the latter methods commonly trade off efficiency for accuracy and generalize well in complex vision tasks, sacrificing the inherent sparsity of spatio-temporal events.…”
Section: Data-driven Approaches (mentioning)
confidence: 99%
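The vanishing spike phenomenon the excerpt cites can be reproduced in a few lines: with untuned weights and fixed thresholds, spike activity collapses layer by layer. The simulation below is purely illustrative; every size, weight scale, and threshold is an assumption.

```python
# Toy simulation of the vanishing spike phenomenon (all values assumed):
# feed dense input spikes through a stack of integrate-and-fire layers
# with untuned random weights and watch the firing rate decay with depth.
import numpy as np

rng = np.random.default_rng(42)
T, width, depth, threshold = 50, 256, 8, 1.0

spikes = (rng.random((T, width)) < 0.5).astype(float)  # dense input
for layer in range(depth):
    W = rng.normal(0.0, 0.05, size=(width, width))     # untuned weights
    mem = np.zeros(width)
    out = np.zeros_like(spikes)
    for t in range(T):
        mem += spikes[t] @ W                 # integrate presynaptic spikes
        out[t] = (mem > threshold).astype(float)
        mem[out[t] == 1] = 0.0               # hard reset on firing
    spikes = out
    print(f"layer {layer + 1}: mean spike rate = {spikes.mean():.4f}")
```

Threshold balancing and weight normalization are the usual remedies discussed in the ANN-to-SNN conversion literature.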
“…Basically, existing event-based algorithms either draw on frame-based computer vision techniques or take inspiration from biological systems. However, pure bio-inspired SNN architectures suffer from the vanishing spike phenomenon [76] in deeper layers and a lack of dedicated hardware. As for standard learning architectures designed for frame-based vision tasks, the reliance on intermediate image-like representations usually leads to the loss of the asynchronous property and to redundant computation.…”
Section: Algorithms (mentioning)
confidence: 99%