2020 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas45731.2020.9180918

A Novel Conversion Method for Spiking Neural Network using Median Quantization

Cited by 6 publications (14 citation statements); references 12 publications.
“…However, it still causes a large drop in accuracy on some large-scale datasets such as CIFAR-10, and usually needs many time steps (hundreds or thousands, even more on ImageNet [48]) to achieve satisfactory performance. Alternatively, the works [28,30,31,33-35,37,40] choose to optimize the firing threshold while keeping the weight parameters of the network unchanged, striving for the same correlation effect. An appropriate threshold can be obtained by similar activation normalization methods [27,29] or computed directly from the related network variables of the target ANN counterparts.…”
Section: Existing Conversion Methods for SNNs
Citation type: mentioning
Confidence: 99%
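The threshold-balancing idea described in the excerpt above can be made concrete with a small calibration routine: run a batch of data through the trained ANN, record per-layer activation statistics, and use a high percentile of each ReLU's output as the firing threshold of the corresponding spiking layer. The sketch below is a generic illustration of such data-driven normalization, not the exact procedure of any cited work; the toy network, the hook-based collection, and the 99.9th-percentile rule are assumptions.

```python
import torch
import torch.nn as nn

# Toy pretrained ANN; in practice this would be the trained source network.
ann = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10), nn.ReLU(),
)

@torch.no_grad()
def calibrate_thresholds(model, calib_batch, percentile=99.9):
    """Return one firing threshold per ReLU, taken from ANN activation statistics."""
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(
                lambda _m, _in, out, key=name: acts.setdefault(key, []).append(out.flatten())
            ))
    model(calib_batch)                     # single calibration pass; weights stay untouched
    for h in hooks:
        h.remove()
    return {name: torch.quantile(torch.cat(vals), percentile / 100.0).item()
            for name, vals in acts.items()}

thresholds = calibrate_thresholds(ann, torch.rand(128, 784))
print(thresholds)                          # one threshold per ReLU layer, e.g. {'1': ..., '3': ...}
```

Using a high percentile rather than the raw maximum is a common robustness choice, since a single outlier activation would otherwise inflate the threshold and slow down firing.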
“…An appropriate threshold can be obtained by similar activation normalization methods [27,29] or computed directly from the related network variables of the target ANN counterparts [30,35,37,40]. However, this layer-by-layer threshold scaling is coarser-grained and is not fully compatible with widely used elements such as BN [41] or deeper architectures [1,39,49]. Building on previous works [28,34], different quantization methods are adopted to optimize both the weights and the thresholds and achieve faster, higher-accuracy performance.…”
Section: Existing Conversion Methods for SNNs
Citation type: mentioning
Confidence: 99%
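One standard way to sidestep the BN incompatibility mentioned above is to fold each BatchNorm layer into the preceding convolution before conversion, so thresholds only need to account for plain conv/linear layers. The sketch below shows that fusion for a Conv2d + BatchNorm2d pair under the usual folding formula; it is a generic illustration, not a step taken from the cited papers.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Absorb BN's running statistics and affine parameters into the conv weights."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # gamma / sqrt(var + eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16)
bn.eval()                                  # fold against the running statistics
x = torch.rand(1, 3, 8, 8)
print(torch.allclose(bn(conv(x)), fold_bn(conv, bn)(x), atol=1e-5))  # True
```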
“…To ensure that the accuracy of the converted SNN is close to that of the ANN, two approaches are commonly used: 1) adjusting the ANN properties before training to account for SNN limitations, and 2) reducing the approximation errors of the SNN during conversion. Examples of the first approach include removing the bias, changing the activation to ReLU [10], clipping ANN activations to the range [0, 1] [11], and adopting quantized ReLU in ANNs [12], [13]. Falling into the second category, different weight normalization methods have been proposed to reduce approximation errors caused by the limited dynamic range of SNN neurons.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
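Both ANN-side adjustments named in this excerpt, clipping activations to [0, 1] and using a quantized ReLU, can be captured in a single activation module. The sketch below is a minimal illustration; the level count T and the straight-through gradient estimator are assumptions of mine rather than details taken from references [11]-[13].

```python
import torch
import torch.nn as nn

class QuantReLU(nn.Module):
    """ReLU clipped to [0, 1] and rounded to discrete levels k/T, k = 0..T."""
    def __init__(self, T: int = 16):
        super().__init__()
        self.T = T

    def forward(self, x):
        x = torch.clamp(x, 0.0, 1.0)             # clip to the SNN's representable range
        q = torch.round(x * self.T) / self.T      # snap to T + 1 spike-count levels
        return x + (q - x).detach()               # straight-through estimator for training

act = QuantReLU(T=8)
print(act(torch.tensor([-0.3, 0.12, 0.5, 1.7])))  # tensor([0.0000, 0.1250, 0.5000, 1.0000])
```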
“…For this purpose, we first introduce an adjustable quantization algorithm in ANN training to minimize the spike approximation errors that commonly arise in ANN-to-SNN conversion, and we propose a scatter-and-gather conversion mechanism for SNNs. This work is based on our previous algorithm (Zou et al., 2020) and hardware (Kuang et al., 2021), and we extend it by (a) testing its robustness to input noise and on a larger dataset (CIFAR-100), (b) developing an incremental mapping framework to carry out efficient network deployment on a typical crossbar-based neuromorphic chip, and (c) providing detailed power and speed analyses that show its excellent application potential. Altogether, the main contributions of this article are summarized as follows:…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
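A quick way to see why quantized activations reduce the spike approximation error is to compare them with the firing rate of an integrate-and-fire neuron driven by a constant input a for T time steps: with reset-by-subtraction, the neuron fires floor(a*T) times, exactly a floor-quantized activation with T levels. The sketch below is my own illustration of that correspondence, not the scatter-and-gather mechanism of Zou et al. (2020).

```python
import torch

def if_firing_rate(a: torch.Tensor, T: int = 8, v_th: float = 1.0) -> torch.Tensor:
    """Rate of a reset-by-subtraction IF neuron driven by constant input a for T steps."""
    v = torch.zeros_like(a)
    spikes = torch.zeros_like(a)
    for _ in range(T):
        v = v + a                        # integrate the constant (rate-coded) input
        fired = (v >= v_th).float()
        spikes = spikes + fired
        v = v - fired * v_th             # soft reset keeps the residual membrane charge
    return spikes / T

a = torch.tensor([0.125, 0.37, 0.5, 0.9])
print(if_firing_rate(a, T=8))            # tensor([0.1250, 0.2500, 0.5000, 0.8750])
print(torch.floor(a * 8) / 8)            # identical: the IF rate is a floor-quantized activation
```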