2021
DOI: 10.48550/arxiv.2107.11746
Preprint

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

Abstract: Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and helped improve their practicality. However, current general-purpose processors s…

Cited by 3 publications (5 citation statements)
References: 34 publications

“…We assume a 32-bit representation for the membrane potential in LIF neurons. Regarding the backward LIF memory of the baseline, we consider the standard backpropagation method, which stores membrane potentials across all timesteps (Liang et al., 2021; Singh et al., 2022; Yin et al., 2022).…”
Section: Experiments Implementation Details (mentioning)
confidence: 99%
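The storage cost described in this excerpt is easy to make concrete. The sketch below is only an illustration of the stated assumption, not code from the cited papers; the layer size and timestep count are hypothetical. It estimates the memory the standard BPTT baseline spends on caching 32-bit membrane potentials across all timesteps:

```python
# Rough memory accounting for the BPTT baseline quoted above: every LIF
# neuron's membrane potential is cached at every timestep so gradients can
# flow back through time. One 32-bit value = 4 bytes per neuron per step.

def bptt_membrane_memory_bytes(num_neurons: int, timesteps: int,
                               bytes_per_value: int = 4) -> int:
    """Memory needed to cache membrane potentials for the backward pass."""
    return num_neurons * timesteps * bytes_per_value

# Hypothetical example: 1,000,000 LIF neurons unrolled over 10 timesteps
# already need ~40 MB just for the cached membrane potentials.
print(bptt_membrane_memory_bytes(1_000_000, 10) / 1e6, "MB")
```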
“…Each LIF neuron needs to store its membrane potential so that gradients can flow back, and this training memory grows as the SNN becomes deeper and uses more timesteps. The resulting computational graph is often too large to train within limited GPU memory (Liang et al., 2021; Singh et al., 2022; Yin et al., 2022). In this context, since our architecture shares the membrane potential across all layers, we can compute each layer's membrane potential from the next layer's membrane potential in real time during the backward step.…”
Section: Introduction (mentioning)
confidence: 99%
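To ground the memory pattern this excerpt refers to, here is a minimal NumPy sketch (my own illustration under assumed shapes, decay factor, and threshold, not code from the cited works) of a LIF layer whose forward pass caches the membrane potential at every timestep. This full trace is what standard BPTT keeps and what recomputation schemes like the one quoted above avoid storing:

```python
import numpy as np

def lif_forward(inputs, decay=0.9, v_th=1.0):
    """inputs: (T, N) input currents for T timesteps and N neurons.
    Returns binary spikes and the membrane-potential trace that the
    standard BPTT baseline keeps around for the backward pass."""
    T, N = inputs.shape
    v = np.zeros(N, dtype=inputs.dtype)
    v_trace = np.empty((T, N), dtype=inputs.dtype)  # one value per neuron per step
    spikes = np.empty((T, N), dtype=inputs.dtype)
    for t in range(T):
        v = decay * v + inputs[t]                   # leaky integration
        s = (v >= v_th).astype(inputs.dtype)        # fire when threshold is crossed
        v = v * (1.0 - s)                           # hard reset after a spike
        v_trace[t] = v                              # cached across *all* timesteps
        spikes[t] = s
    return spikes, v_trace

# Hypothetical usage: 10 timesteps, 128 neurons.
spk, trace = lif_forward(np.random.rand(10, 128).astype(np.float32))
print(trace.nbytes, "bytes of cached membrane potential")
```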
“…Compared to standard ANNs, SNNs require a significantly higher computational cost for training due to multiple feedforward steps [53]. This makes it difficult to search for an optimal SNN architecture with NAS techniques that train each architecture candidate multiple times [2, 78, 107-109] or train a complex supernet [8, 29, 55, 86].…”
Section: NAS Without Training (mentioning)
confidence: 99%
“…For the first question, we highlight that mainstream NAS algorithms either require multiple training stages [2, 78, 107-109] or require training a supernet once with all architecture candidates [8, 29, 55, 86], which takes longer to converge than standard training. Because SNNs train significantly more slowly than ANNs (e.g., training an SNN on MNIST with an NVIDIA V100 GPU incurs 11.43× higher latency than the same ANN architecture [53]), the above NAS approaches are difficult to apply to SNNs. On the other hand, recent works [12, 58, 88] have proposed efficient NAS approaches that search for the best neuron cell in initialized networks without any training.…”
Section: Introduction (mentioning)
confidence: 99%
“…One main source of energy efficiency in neuromorphic computing is spike-based convolution, meaning that the convolutional layer receives and processes binary spiking input. Specialized optimizations, for example lookup tables, can consequently be applied to further boost its efficiency on neuromorphic devices (Liang et al. 2021). Once LIF(•) is removed, the CONV at the top of the next block receives continuous input rather than binary spikes, making it difficult to benefit from spike-based convolution and the rich input/output sparsity.…”
Section: Design Criteria (mentioning)
confidence: 99%
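As a small illustration of why binary spike inputs make convolution cheap (a sketch of the general property mentioned in this excerpt, not the lookup-table design from Liang et al. 2021; the 1-D shapes are hypothetical), note that with {0,1} inputs the multiply-accumulate collapses into accumulating the kernel weights selected by spikes:

```python
import numpy as np

def spike_conv1d(spikes, kernel):
    """spikes: binary {0,1} vector of length L; kernel: K weights.
    With binary inputs, each output element is just the sum of the kernel
    weights at positions where a spike occurred; no multiplications are
    needed, which is the property lookup-table style schemes exploit."""
    L, K = len(spikes), len(kernel)
    out = np.zeros(L - K + 1, dtype=kernel.dtype)
    for i in range(L - K + 1):
        window = spikes[i:i + K].astype(bool)
        out[i] = kernel[window].sum()        # accumulate selected weights only
    return out

# Hypothetical check against an ordinary sliding dot product.
spikes = (np.random.rand(16) > 0.7).astype(np.float32)
kernel = np.random.randn(3).astype(np.float32)
dense = np.array([spikes[i:i + 3] @ kernel for i in range(14)], dtype=np.float32)
assert np.allclose(spike_conv1d(spikes, kernel), dense)
```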