2022
DOI: 10.48550/arxiv.2202.11946
Preprint

Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting

Abstract: Recently, brain-inspired spiking neural networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics. Still, it is difficult to efficiently train deep SNNs due to the non-differentiability of their activation function, which disables the gradient descent approaches typically used for traditional artificial neural networks (ANNs). Although the adoption of a surrogate gradient (SG) formally allows for the back-propagation of losses, the discrete spikin…
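The surrogate-gradient idea referenced in the abstract can be illustrated with a minimal PyTorch sketch: a Heaviside spike in the forward pass and a rectangular surrogate derivative in the backward pass. The window width `alpha` and the box-shaped surrogate are illustrative choices, not values taken from the paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in forward; rectangular surrogate gradient in backward."""

    alpha = 1.0  # width of the surrogate window (illustrative value)

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Non-differentiable step: spike if the (threshold-shifted) potential crosses 0.
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Replace the step function's derivative with a box around the threshold.
        surrogate = (membrane_potential.abs() < SurrogateSpike.alpha / 2).float() / SurrogateSpike.alpha
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply  # use in place of a hard threshold inside a spiking layer
```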

Cited by 21 publications (41 citation statements) | References 14 publications
“…TdBN and BNTT [48], [49] improve direct training by adapting batch normalization layers in SNNs. TET [50] studies the choice of the loss function to provide better convergence in SNNs. [51] employs neural architecture search tailored for SNNs.…”
Section: Direct Training SNN | mentioning
confidence: 99%
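The loss-function choice attributed to TET [50] can be sketched as follows: compute the classification loss at every time step and average it, instead of applying the loss once to the time-averaged output. The model interface assumed here (logits of shape [T, batch, classes]) is an illustrative assumption, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def per_timestep_loss(outputs_per_step: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Average cross-entropy over time steps (TET-style objective, sketched).

    outputs_per_step: [T, batch, num_classes] logits, one slice per time step.
    target: [batch] class indices.
    """
    T = outputs_per_step.shape[0]
    losses = [F.cross_entropy(outputs_per_step[t], target) for t in range(T)]
    return torch.stack(losses).mean()

def mean_output_loss(outputs_per_step: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Conventional alternative for comparison: loss on the time-averaged output."""
    return F.cross_entropy(outputs_per_step.mean(dim=0), target)
```

The difference between the two functions is only where the temporal average is taken: before the loss (conventional) or after per-step losses (the TET-style variant).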
“…Specifically, Swin Transformer (17) is used as the baseline and the ANN-to-SNN conversion algorithm is implemented as the training method. Next, we validate the effectiveness of our model and compare it with other state-of-the-art methods, including RMP (18), RNL (19), QCFS (20), SNNC-AP (21), Hybrid (22), tdBN (23), TET (24) and DSR (5), for image classification tasks on CIFAR-10 (25), CIFAR-100 (26), and ImageNet (27). Further experiments have been conducted to check the performance of our model with ultra-low time steps.…”
Section: Results | mentioning
confidence: 99%
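The "ANN-to-SNN conversion algorithm" mentioned above is not detailed in the quoted statement; a common ingredient of such conversion pipelines is threshold calibration, sketched below under the assumption of a ReLU-based ANN whose activations are approximated by IF neuron firing rates. All names (`ann`, `calibration_loader`) are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def calibrate_thresholds(ann: nn.Module, calibration_loader):
    """Record, for each ReLU layer, the maximum activation on calibration data.

    A common conversion heuristic sets the IF neuron's firing threshold to this
    value so that firing rates approximate the corresponding ReLU outputs.
    """
    thresholds = {}

    def make_hook(name):
        def hook(module, inputs, output):
            current = output.detach().max().item()
            thresholds[name] = max(thresholds.get(name, 0.0), current)
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in ann.named_modules() if isinstance(m, nn.ReLU)]

    for images, _ in calibration_loader:
        ann(images)  # hooks record per-layer activation maxima

    for h in handles:
        h.remove()
    return thresholds
```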
“…) as suggested by . Now, denote the overall spiking neural network as a function f_T(x); its forward propagation can be formulated as (Deng et al., 2022) or by conversion (Bu et al., 2022b).…”
Section: Spiking Neural Network | mentioning
confidence: 99%
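The view of the network as a function f_T(x) unrolled over T time steps, referenced in the quote above, can be illustrated with a minimal leaky integrate-and-fire (LIF) layer. The decay constant, threshold, and hard-reset rule below are common choices for illustration, not values from the cited works; during training, the hard threshold would typically be replaced by a surrogate-gradient spike function such as the one sketched earlier.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Minimal leaky integrate-and-fire layer unrolled over T time steps."""

    def __init__(self, in_features: int, out_features: int,
                 tau: float = 0.5, v_threshold: float = 1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.tau = tau                  # membrane decay factor (illustrative value)
        self.v_threshold = v_threshold  # firing threshold (illustrative value)

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, batch, in_features] -> spikes: [T, batch, out_features]
        T = x_seq.shape[0]
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for t in range(T):
            v = self.tau * v + self.fc(x_seq[t])   # leaky integration of input current
            s = (v >= self.v_threshold).float()    # fire when the threshold is crossed
            v = v * (1.0 - s)                      # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```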