2021
DOI: 10.48550/arxiv.2106.06984
Preprint

A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural Networks Calibration

Cited by 11 publications (22 citation statements)
References 28 publications

“…Spike-timing-dependent plasticity (STDP) reinforces or punishes a neuronal connection based on the spike history [5,6,36,41,90,97]. Also, a line of work [20,30,31,52,72,73,98] approximates ReLU with LIF neurons by converting pre-trained ANNs to SNNs using weight or threshold balancing.…”
Section: Spiking Neural Network
Confidence: 99%
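
As an illustration of the threshold-balancing step mentioned in the quote above, here is a minimal PyTorch sketch. It assumes a plain nn.Sequential network of Conv/Linear layers followed by ReLU; the function name balance_thresholds and the max-activation rule are illustrative assumptions, not taken from any of the cited works.

import torch
import torch.nn as nn

@torch.no_grad()
def balance_thresholds(model, calib_loader, device="cpu"):
    # Record the maximum activation seen at each ReLU on calibration
    # data; that maximum is then used as the firing threshold V_th of
    # the IF neuron that replaces the ReLU after conversion.
    acts = {}
    hooks = []
    for i, m in enumerate(model):
        if isinstance(m, nn.ReLU):
            def hook(mod, inp, out, i=i):
                acts[i] = max(acts.get(i, 0.0), out.max().item())
            hooks.append(m.register_forward_hook(hook))
    model.eval().to(device)
    for x, _ in calib_loader:
        model(x.to(device))
    for h in hooks:
        h.remove()
    return acts  # {relu_index: V_th}
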
“…Finally, we compare the memory and computational efficiency of SNASNet with previous works [31,52,68] in Table 5. In the table, we also compare SNASNet-Fw-APx and SNASNet-Bw-APx, where x is the kernel size of the AvgPooling layer in the vectorize block (we use x = 2 in our default setting).…”
Section: Memory and Computational Efficiency
Confidence: 99%
“…However, the key disadvantage of DNN-to-SNN conversion is that it yields SNNs with much higher latency than other techniques. Some previous research [16], [24] proposed down-scaling the threshold term to train low-latency SNNs, but the scaling factor was either a hyperparameter or obtained via a linear grid search, and the latency needed for convergence remained large (>64).…”
Section: B. DNN-to-SNN Conversion
Confidence: 99%
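
The linear grid search referred to above can be pictured with a short sketch. Here evaluate_snn is a hypothetical helper (not from the cited papers) that returns validation accuracy for a converted SNN given per-layer thresholds; the search range and step count are assumptions.

import numpy as np

def search_threshold_scale(evaluate_snn, v_th, scales=np.linspace(0.1, 1.0, 10)):
    # Try each candidate scaling factor on the per-layer thresholds
    # and keep the one that yields the best validation accuracy.
    best_scale, best_acc = 1.0, float("-inf")
    for s in scales:
        acc = evaluate_snn({layer: s * v for layer, v in v_th.items()})
        if acc > best_acc:
            best_scale, best_acc = s, acc
    return best_scale, best_acc
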
“…To further reduce the conversion error, [15] minimized the difference between the DNN and SNN post-activation values for each layer. To do this, the activation function of the IF SNN must first be derived [15], [16]. We assume that the initial membrane potential of a layer l, U^l(0), is 0.…”
Section: B. DNN-to-SNN Conversion
Confidence: 99%
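
For context, a common way this derivation goes (a sketch under the quote's assumption U^l(0) = 0, for an IF neuron with soft reset and binary spikes; the notation is ours, not necessarily that of the cited papers):

% Membrane update of an IF neuron with soft reset over T timesteps:
%   U^l(t) = U^l(t-1) + W^l s^{l-1}(t) - V_{th}^l \, s^l(t)
% Summing over t = 1, ..., T and dividing by T, with U^l(0) = 0:
\bar{s}^{\,l}
  = \frac{1}{V_{th}^{l}}
    \left( W^{l}\,\bar{s}^{\,l-1} - \frac{U^{l}(T)}{T} \right),
\qquad
\bar{s}^{\,l} = \frac{1}{T}\sum_{t=1}^{T} s^{l}(t).

The residual term U^l(T)/T vanishes as T grows, so the average spike rate approaches a clipped, ReLU-like function of the average input, which is why longer time windows reduce conversion error at the cost of latency.
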