2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)
DOI: 10.1109/aicas.2019.8771624
Conversion of Synchronous Artificial Neural Network to Asynchronous Spiking Neural Network using sigma-delta quantization

Cited by 24 publications (19 citation statements)
References 23 publications
“…Our goal was to optimize the number of operations for spatio-temporal sparsity while performing asynchronous inference. This paper is a continuation of our previous brief paper [36]. In the present paper, in addition to more detailed explanations, we extend our algorithm to use "valued spikes", which results in a smaller number of required events.…”
Section: * Amirreza Yousefzadeh and Mina A. Khoei contributed equally
confidence: 81%
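To make the "valued spikes" idea concrete, here is a minimal sketch of sigma-delta (send-on-delta) encoding. Everything in it (the function name `sigma_delta_encode`, the `threshold` value, the event format) is illustrative rather than taken from the cited paper: a neuron emits an event only when its activation has changed by at least one quantization step, and the event carries the quantized change as its value.

```python
import numpy as np

def sigma_delta_encode(activations, prev_state, threshold=0.5):
    """Emit valued spikes only where an activation changed enough.

    activations: current layer outputs (1-D float array)
    prev_state:  last transmitted value per neuron (1-D float array)
    threshold:   quantization step; smaller changes emit no event
    Returns (events, new_state), where events is a list of
    (neuron_index, spike_value) pairs.
    """
    delta = activations - prev_state
    # Quantize each change to an integer number of threshold steps.
    n_steps = np.round(delta / threshold).astype(int)
    idx = np.nonzero(n_steps)[0]
    # One valued spike carries the whole quantized change,
    # instead of |n_steps| separate unary spikes.
    events = [(int(i), float(n_steps[i] * threshold)) for i in idx]
    new_state = prev_state.copy()
    new_state[idx] += n_steps[idx] * threshold
    return events, new_state
```

Because a single event can carry a multi-step change, large activation swings cost one valued spike rather than a burst of unary spikes, which is how the event count shrinks.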
“…On the contrary, using this low-precision quantization does not harm the final accuracy, but enables a cheap memory budget on many popular neuromorphic systems such as Akopyan et al. (2015), Davies et al. (2018), and Kuang et al. (2021). More specifically, our networks complete simulation of one input sample within only one time step, compared with other conversion methods that need dozens or even hundreds of simulation time steps (Lee et al., 2016, 2020; Bodo et al., 2017; Mostafa et al., 2017; Xu et al., 2017; Rueckauer and Liu, 2018; Wu et al., 2018; Yousefzadeh et al., 2019).…”
Section: Methods
confidence: 99%
“…The MNIST dataset (Lecun and Bottou, 1998) of handwritten digits has been widely used in image classification. In our experiments, we use a ternary-valued {-1, 0, 1} weight quantization as in Li and Liu (2016), rather than full precision (16 or 32 bits) as in many other works (Lee et al., 2016, 2020; Bodo et al., 2017; Mostafa et al., 2017; Rueckauer and Liu, 2018; Wu et al., 2018; Yousefzadeh et al., 2019), to facilitate hardware deployment; we find that weight quantization with larger bit-widths contributes very little to final accuracy, which is consistent with Rastegari et al. (2016) and Zhou et al. (2016). All convolutional networks are trained with the standard ADAM rule (Kingma and Ba, 2014), with an initial learning rate of 0.001 decayed by a factor of 10 every 200 epochs, using TensorLayer (Dong et al., 2017), a customized deep learning library.…”
Section: MNIST Dataset
confidence: 99%
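For readers unfamiliar with the ternarization step this statement describes, the following is a minimal sketch in the spirit of Li and Liu's (2016) ternary weight networks. The function name `ternarize` and the `delta_ratio` parameter are assumptions for illustration, not code from the cited work; TWN derives its threshold as roughly 0.7 times the mean absolute weight.

```python
import numpy as np

def ternarize(weights, delta_ratio=0.7):
    """Threshold-based ternarization in the spirit of Li and Liu (2016).

    Weights with magnitude below a threshold become 0; the rest become
    +1 or -1, scaled by a per-tensor factor alpha fitted to the
    surviving weights. `delta_ratio` mirrors the 0.7 * mean|W|
    heuristic of ternary weight networks.
    """
    thr = delta_ratio * np.mean(np.abs(weights))
    ternary = np.where(weights > thr, 1, np.where(weights < -thr, -1, 0))
    mask = ternary != 0
    # Scale that best reconstructs the kept weights on average.
    alpha = float(np.abs(weights[mask]).mean()) if mask.any() else 0.0
    return ternary, alpha  # effective weight matrix is alpha * ternary
```

Storing only a 2-bit ternary code plus one scale per tensor is what makes the memory budget cheap on the neuromorphic platforms mentioned above.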
“…Sigma-delta encoding with discretized deltas has been shown to result in a significant reduction in operation count [18]. Other work has expanded upon this by enabling conversion of regular, pre-trained neural networks to spiking neural networks [7,21]. The addition of thresholding logic to suppress propagation of small deltas reduces computation counts even further [11].…”
Section: Related Work
confidence: 99%
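The operation-count savings this statement attributes to suppressing small deltas can be sketched as follows. This is an illustrative Python fragment, not the implementation of any cited work: the names `delta_layer_update` and `accum` and the `threshold` value are hypothetical. The layer keeps a running pre-activation accumulator, so each event touches only the weight columns of inputs whose change exceeded the threshold.

```python
import numpy as np

def delta_layer_update(W, accum, input_delta, threshold=0.05):
    """Event-driven update of one fully connected layer.

    W:           weight matrix of shape (n_out, n_in)
    accum:       running pre-activation (initialized to W @ x0 + bias)
    input_delta: change of each input since its last transmitted value
    threshold:   deltas below this magnitude are suppressed (dropped)
    """
    # Keep only the inputs whose change is significant.
    changed = np.nonzero(np.abs(input_delta) >= threshold)[0]
    # Sparse update: only the columns for changed inputs are read.
    accum += W[:, changed] @ input_delta[changed]
    # ReLU on the updated pre-activation; accum persists across events.
    return np.maximum(accum, 0.0), accum
```

The cost per update scales with the number of surviving deltas rather than the layer width, which is where the reduction in operation count comes from; the threshold trades a small amount of accuracy for fewer propagated events.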