2023
DOI: 10.48550/arxiv.2302.14311
Preprint
Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

Abstract: Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing. For training the non-differentiable SNN models, the backpropagation through time (BPTT) with surrogate gradients (SG) method has achieved high performance. However, this method suffers from considerable memory cost and long training time. In this paper, we propose the Spatial Learning Through Time (SLTT) method that can achieve high performance while greatly improving training efficiency compared with BPT…
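The abstract describes training SNNs with BPTT plus surrogate gradients, and SLTT's idea of dropping temporal dependencies from the backward pass to cut memory and time costs. Below is a minimal, hypothetical sketch (not the paper's code) of how such training is commonly implemented in PyTorch: a Heaviside spike with a rectangular surrogate gradient, and a `detach_temporal` flag that cuts the backward path through time to mimic spatial-only credit assignment. The neuron dynamics, constants (tau, threshold, surrogate width), shapes, and function names are illustrative assumptions.

```python
# Hypothetical sketch (assumptions, not the paper's implementation):
# a LIF neuron trained with surrogate gradients. Setting
# detach_temporal=True removes the backward path through time,
# illustrating spatial-only credit assignment as in the abstract.
import torch


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_threshold):
        ctx.save_for_backward(v_minus_threshold)
        return (v_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold (width is an assumption).
        surrogate = (v.abs() < 0.5).float()
        return grad_output * surrogate


def run_lif(inputs, weight, detach_temporal=False, tau=2.0, v_th=1.0):
    """inputs: (T, batch, in_features); weight: (out_features, in_features).
    Returns per-neuron spike counts accumulated over T timesteps."""
    T = inputs.shape[0]
    v = torch.zeros(inputs.shape[1], weight.shape[0])
    out = 0.0
    for t in range(T):
        current = inputs[t] @ weight.t()
        v = v + (current - v) / tau          # leaky integration
        spike = SpikeFn.apply(v - v_th)
        v = v - spike * v_th                 # soft reset
        if detach_temporal:
            # Cut the membrane-potential path through time, so backprop
            # only follows the spatial (layer-to-layer) connections.
            v = v.detach()
        out = out + spike
    return out


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(4, 8, 16)                 # T=4 timesteps, batch=8, 16 inputs
    w = torch.randn(10, 16, requires_grad=True)
    loss = run_lif(x, w, detach_temporal=True).sum()
    loss.backward()                          # gradients flow via the surrogate
    print(w.grad.abs().mean())
```

With `detach_temporal=False` the loop keeps the full computational graph across timesteps (BPTT-style, memory grows with T); with `detach_temporal=True` only the current timestep's activations need to be kept for the backward pass, which is the intuition behind the memory savings the abstract claims.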

Cited by 2 publications (2 citation statements). References 58 publications.
“…Even though Spikformer (Zhou et al 2023) uses a more complex transformer structure and data augmentation, the accuracy of our SSNN is still 5.08% higher than that of Spikformer. When the average timestep is increased to 8, SSNN achieves an accuracy of 78.57%, surpassing SLTT (Meng et al 2023)…”
Section: Comparison With Existing Methods
Mentioning confidence: 99%
“…This observation has been presented in the previous work Chowdhury et al (2021b); Li et al (2023b,a), where they show that SNNs can work with very low timesteps (1∼2). Note that the approaches to memory reduction proposed by other works, such as those reducing the simulation timestep Chowdhury et al (2021b) and reducing SNN time dependence Meng et al (2023), can be combined with our layer/channel-wise sharing technique. This would lead to an even more significant decrease in memory usage, demonstrating the compatibility and potential of our method when integrated with other optimization strategies.…”
Section: Performance Comparison
Mentioning confidence: 99%