Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design

Year: 2022
DOI: 10.1109/tbcas.2022.3181808


Cited by 5 publications (2 citation statements); references 36 publications.
“…Pan et al. [42] presented a bit-serial early termination scheme for convolutional neural networks (CNNs). Choi et al. [43] presented early termination for spiking neural network (SNN) processors. Although our propositions are in the same spirit, they have been applied in a different context of frequency transforms of neural tensors.…”
Section: Predictive Early Termination by Exploiting Output Sparsity
Confidence: 99%
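
The early-termination idea this statement refers to can be sketched in a few lines. The sketch below is a generic, hypothetical illustration of predictive early termination exploiting output sparsity in a bit-serial (MSB-first) dot product: once the partial sum plus an upper bound on the remaining bit-planes' contribution is still negative, the ReLU output is known to be zero and the remaining cycles are skipped. The function name relu_dot_bitserial, the unsigned 8-bit activation format, and all parameters are assumptions for illustration only; this is not the specific hardware scheme of [42], [43], or the cited SNN processor.

    import numpy as np

    def relu_dot_bitserial(weights, activations, n_bits=8):
        # Bit-serial, MSB-first dot product followed by ReLU.
        # Early termination: stop once the partial sum plus the largest
        # possible contribution of the remaining bit-planes is negative,
        # since ReLU would clamp the result to zero anyway.
        w = np.asarray(weights, dtype=np.int64)      # signed weights
        a = np.asarray(activations, dtype=np.int64)  # unsigned n_bits-wide activations
        pos_w_sum = int(w[w > 0].sum())              # best case: every remaining bit is 1
        partial = 0
        for b in range(n_bits - 1, -1, -1):          # process bit-planes MSB first
            bit_plane = (a >> b) & 1                 # b-th bit of every activation
            partial += int((w * bit_plane).sum()) << b
            remaining_max = ((1 << b) - 1) * pos_w_sum
            if partial + remaining_max < 0:          # output can no longer become positive
                return 0, n_bits - b                 # ReLU output is 0; bit-cycles spent
        return max(partial, 0), n_bits

    # Example: strongly negative pre-activations terminate after a few bit-planes.
    rng = np.random.default_rng(0)
    w = rng.integers(-8, 8, size=64)
    a = rng.integers(0, 256, size=64)
    print(relu_dot_bitserial(w, a))

The savings come from output sparsity: the more pre-activations that end up clamped to zero by ReLU, the more dot products terminate early.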
“…A conservative estimate of the energy required to train the network is 1.6 μJ per training image, obtained by assuming the lowest resistance (and thus the highest current) for each memristive weight in the network. Although this is more expensive per training image than some advanced training algorithms, which consume 71.3 nJ per image (Choi et al., 2022), the more compact network and quicker learning rate of the dynamical memristive neural network result in an ultralow total training energy of 94 mJ before convergence. Conventional neural networks that use backpropagation require hundreds of epochs to train toward convergence, resulting in a total energy consumption of 1.7–6.4 J depending on the emphasis placed on energy efficiency.…”
Section: STDP-enabled Physical Neural Network With Lifelong Self-lear...
Confidence: 99%
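
As a quick sanity check of the arithmetic quoted above, the following hypothetical Python sketch relates the figures; the variable names are ours, and it assumes the 1.6 μJ-per-image and 94 mJ figures describe the same training run.

    # Back-of-the-envelope check of the energy figures in the statement above.
    E_PER_IMAGE_J = 1.6e-6     # memristive network, energy per training image
    E_TOTAL_J = 94e-3          # memristive network, total training energy to convergence
    E_BACKPROP_J = (1.7, 6.4)  # conventional backpropagation, total energy range

    images = E_TOTAL_J / E_PER_IMAGE_J
    print(f"implied training images before convergence: {images:,.0f}")   # ~58,750

    lo, hi = (e / E_TOTAL_J for e in E_BACKPROP_J)
    print(f"backprop totals are {lo:.0f}x to {hi:.0f}x the memristive total")  # ~18x to ~68x

The ~58,750-image count is only implied by the quoted per-image and total figures; it is not stated in the source.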