Proceedings of the 59th ACM/IEEE Design Automation Conference 2022
DOI: 10.1145/3489517.3530457

A time-to-first-spike coding and conversion aware training for energy-efficient deep spiking neural network processor design

Abstract: In this paper, we present an energy-efficient spiking neural network (SNN) architecture that can seamlessly run deep SNNs with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function developed to simulate the SNN during ANN training is efficiently exploited to reduce the data representation error after conversion. Based on the CAT technique, we also present a time-to-fir…
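
The abstract is truncated above, so the exact CAT activation is not reproduced here. As an illustration only, the sketch below assumes a conversion-aware activation that behaves like a clipped, uniformly quantized ReLU whose resolution matches the number of SNN time steps; the function name `cat_activation` and the parameters `T` and `v_max` are hypothetical and not taken from the paper.

```python
import numpy as np

def cat_activation(x, T=32, v_max=1.0):
    """Illustrative conversion-aware activation (hypothetical form).

    Clips the pre-activation to [0, v_max] and quantizes it to T levels,
    mimicking the finite temporal resolution the SNN has after conversion,
    so the ANN is trained only on values the converted SNN can represent.
    """
    clipped = np.clip(x, 0.0, v_max)                   # ReLU-like clipping
    return np.round(clipped * T / v_max) * v_max / T   # quantize to T levels

# Example: values representable by a converted SNN with T = 32 time steps
x = np.array([-0.3, 0.017, 0.51, 1.4])
print(cat_activation(x))   # approximately [0., 0.03125, 0.5, 1.]
```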

Cited by 8 publications (6 citation statements)
References 15 publications
“…Most of the ANN2SNN methods are based on rate encoding and need far more time steps, e.g., T ≥ 128, than the surrogate gradient method. Conversion methods based on latency encoding rather than rate coding are also reported (93). Note that recent research has achieved a substantial reduction of T, such as surrogate learning methods (94) with T = 5 and ANN2SNN methods (51,95) with T ≤ 64.…”
Section: High-performance Simulation
confidence: 99%
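
To make the rate-versus-latency distinction in the statement above concrete, here is a minimal sketch (not taken from the cited works) of the two encodings for a single normalized activation: rate coding spreads the value over many time steps as a spike count, whereas latency (time-to-first-spike) coding represents it with the timing of a single spike.

```python
import numpy as np

def rate_encode(a, T):
    """Rate coding: value a in [0, 1] -> spike train whose firing rate ~ a."""
    # Deterministic variant: fire round(a * T) evenly spaced spikes in T steps.
    n_spikes = int(round(a * T))
    train = np.zeros(T, dtype=int)
    if n_spikes > 0:
        idx = np.linspace(0, T - 1, n_spikes).astype(int)
        train[idx] = 1
    return train

def ttfs_encode(a, T):
    """Latency (time-to-first-spike) coding: larger a -> earlier single spike."""
    train = np.zeros(T, dtype=int)
    if a > 0:
        t = int(round((1.0 - a) * (T - 1)))  # a = 1 fires at step 0
        train[t] = 1
    return train

a = 0.75
print(rate_encode(a, T=8))   # several spikes; value ~ spike count / T
print(ttfs_encode(a, T=8))   # exactly one spike; its timing encodes the value
```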
“…Although significant power and energy savings can be achieved by using TTFS-based SNNs, TTFS-based SNNs that are constructed by either training from scratch (Mostafa, 2016; Comsa et al., 2020) or converting from the pre-trained ANN (Rueckauer and Liu, 2018; Lew et al., 2022) tend not to perform as well as their ANN counterparts in terms of classification accuracy. As demonstrated in a recent study (Rueckauer and Liu, 2018), converting from ANNs to TTFS-based SNNs unfortunately leads to accumulated approximation errors, which results in significantly lower accuracy in the SNNs than the equivalent ANNs, particularly in larger network architectures.…”
Section: Training of TTFS-based SNNs
confidence: 99%
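
As context for the accumulated-approximation-error argument above, the sketch below shows a generic single-spike (TTFS-style) integrate-and-fire neuron; it is an assumption-level illustration, not the specific model of Rueckauer and Liu (2018) or Lew et al. (2022). Because each neuron emits at most one spike, any error in its spike time is passed on to the next layer, and such errors compound across a deep network.

```python
def ttfs_if_neuron(in_spike_times, weights, threshold=1.0, T=32):
    """Generic single-spike IF neuron (illustrative only).

    Integrates weighted input spikes over T discrete steps and emits its one
    output spike at the first step where the membrane potential crosses the
    threshold; if it never crosses, the neuron stays silent (returns None).
    """
    v = 0.0
    for t in range(T):
        # Add the contribution of every input that spikes at this step.
        v += sum(w for s, w in zip(in_spike_times, weights) if s == t)
        if v >= threshold:
            return t          # time-to-first-spike output
    return None               # no spike: information about small inputs is lost

# Example: two early, strongly weighted inputs -> an early output spike (step 3)
print(ttfs_if_neuron(in_spike_times=[1, 3, 10], weights=[0.6, 0.5, 0.4]))
```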
“…Although there are some temporal-based works that propose solutions with higher accuracy on CIFAR and ImageNet (Park et al., 2020; Stöckl and Maass, 2021), these spiking mechanisms require a much more complex hardware design with much higher energy consumption. For example, high-accuracy algorithms (Stöckl and Maass, 2021; Lew et al., 2022) require updating neuron potentials dynamically, which leads to complex logic and higher memory access counts. The original TTFS-based SNN algorithm, which is the main workload of this work, is currently unable to achieve comparable accuracy on these complex datasets.…”
Section: Training Network
confidence: 99%
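
To illustrate why dynamically updated potentials imply extra logic and memory traffic, here is a rough, hypothetical comparison that is not taken from the cited papers: a plain IF neuron only accumulates weighted inputs when they arrive, whereas a neuron whose potential must be rescaled at every time step performs an additional read-modify-write per step regardless of input activity.

```python
def plain_if_step(v, weighted_input, threshold=1.0):
    """One step of a plain IF neuron: a single accumulate and compare."""
    v += weighted_input
    fired = v >= threshold
    if fired:
        v -= threshold        # reset by subtraction
    return v, fired

def dynamic_potential_step(v, weighted_input, threshold=1.0, scale=0.5):
    """One step of a neuron with a dynamically scaled potential (illustrative).

    The extra multiply forces a read-modify-write of v on every step, even
    when no input spike arrives, which is one source of the added memory
    access count mentioned above.
    """
    v = v * scale + weighted_input
    fired = v >= threshold
    if fired:
        v -= threshold
    return v, fired
```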
“…As benchmarks, pre-trained VGG-16, ResNet-18, and ResNet-34 models on CIFAR-10, CIFAR-100, and ImageNet are directly converted to one-spike SNNs. The major difference between the one-spike SNN and the prior work [24], [25], [32] is that the conversion process does not require any constraints on ANNs or conversion-aware ANN training. In [16], [22], conversion was performed by removing batch normalization layers or bias terms, resulting in lower accuracy.…”
Section: A. Experimental Setup
confidence: 99%
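
The batch-normalization removal mentioned in the last statement is commonly done by folding the BN statistics into the preceding layer's weights and bias; the sketch below shows the standard folding identities as a general technique, not the specific procedure used in [16], [22].

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, running_mean, running_var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding linear/conv layer.

    BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, so for y = w @ x + b:
      w_folded = w * gamma / sqrt(var + eps)   (scaled per output channel)
      b_folded = (b - mean) * gamma / sqrt(var + eps) + beta
    The folded layer computes the same function as the layer followed by BN.
    """
    scale = gamma / np.sqrt(running_var + eps)
    w_folded = w * scale[:, None]              # w assumed shape [out, in]
    b_folded = (b - running_mean) * scale + beta
    return w_folded, b_folded
```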