2022
DOI: 10.48550/arxiv.2202.07221
Preprint

Navigating Local Minima in Quantized Spiking Neural Networks

Abstract: Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms. However, these networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds. The broadly accepted trick to overcoming this is through the use of biased gradient estimators: surrogate gradients which approximate thresholding in Spiking Neural Networks (SNNs), and Straight-Through Estimators…
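
The surrogate-gradient idea referenced in the abstract can be illustrated with a minimal sketch (assuming PyTorch; the class name and the fast-sigmoid surrogate shape are illustrative choices, not necessarily the paper's exact formulation): the forward pass applies the hard spiking threshold, while the backward pass substitutes a smooth approximation so a gradient signal can propagate.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        # Hard threshold: emit a spike (1.0) where the membrane potential reaches threshold.
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Surrogate: derivative of a fast sigmoid centred at the threshold,
        # used in place of the Heaviside derivative (which is zero almost everywhere).
        surrogate = 1.0 / (1.0 + 10.0 * (membrane_potential - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None
```

A neuron layer would call `SpikeSurrogate.apply(v_mem)` wherever it would otherwise apply the hard threshold, leaving the rest of the backpropagation pipeline unchanged.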

Cited by 1 publication (1 citation statement)
References: 35 publications
“…As shown in Refs. [15], [25], [26], SNNs are highly tolerant to weight quantization. In the extreme, the trainable weights of an SNN can be binarized to w ∈ {−1, +1} with small performance degradation as performance is 'propped up' by non-binarized variables, including membrane potential and time.…”
Section: Neuron Model
confidence: 99%
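
To make the weight binarization described in this citation statement concrete, here is a minimal sketch of a binarized linear layer trained with a straight-through estimator (assuming PyTorch; `BinaryLinear` is an illustrative name, not code from the cited works): the forward pass uses weights constrained to {−1, +1}, while the gradient is passed straight through to the latent full-precision weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized to {-1, +1} in the forward pass.

    The latent full-precision weights receive updates via a straight-through
    estimator: binarization is applied forward, but its gradient is treated
    as the identity in the backward pass.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))

    def forward(self, x):
        w = self.weight
        # Strict {-1, +1} binarization (unlike sign(), which maps 0 to 0).
        binary = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
        # Straight-through estimator: forward uses the binarized weights,
        # backward routes the gradient directly to the latent weight w.
        w_bin = w + (binary - w).detach()
        return F.linear(x, w_bin)
```

As the statement notes, the non-binarized state variables of the spiking neuron (membrane potential and spike timing) carry much of the representational load, which is why such aggressive weight quantization costs relatively little accuracy.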