2023
DOI: 10.1109/tcsii.2023.3260701
The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks

Cited by 8 publications (1 citation statement)
References 17 publications
“…SCNNs have achieved high precision with few timesteps (Chowdhury et al., 2021), with the spatio-temporal backpropagation (STBP) training method (Zhu and Shi, 2018), direct input encoding (Wu et al., 2019), and re-training strategy (Chowdhury et al., 2021). Second, a series of methods have been proposed to compact SCNNs, such as network pruning (Liu et al., 2022; Schaefer et al., 2023) to increase the sparsity and low-bit quantization (Kheradpisheh et al., 2022; Shymyrbay et al., 2022) to reduce the computational precision.…”
Section: Introduction
confidence: 99%
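
The citation statement above names the two weight-compaction techniques this paper studies: pruning to increase sparsity and low-bit quantization to reduce computational precision. As a minimal sketch of how the two steps compose, assuming simple magnitude pruning and uniform symmetric quantization (illustrative choices, not the specific schemes of the paper or the cited works; the helper names prune_by_magnitude and quantize_uniform are hypothetical):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until the target
    fraction of zeros is reached (illustrative magnitude pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize_uniform(weights, bits=4):
    """Uniform symmetric quantization to the given bit width
    (illustrative; returns dequantized weights on the coarse grid)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 levels each side for 4-bit signed
    scale = np.max(np.abs(weights)) / qmax  # map the largest weight to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale

# Example: compress a random synaptic weight matrix
w = np.random.randn(128, 128).astype(np.float32)
w_pruned, mask = prune_by_magnitude(w, sparsity=0.8)
w_compact = quantize_uniform(w_pruned, bits=4)
print(f"sparsity: {1 - mask.mean():.2f}, unique levels: {len(np.unique(w_compact))}")
```

One detail worth noting in this sketch: pruning is applied before quantization, and zeros survive the quantization step (round(0 / scale) = 0), so the sparsity gained from pruning is preserved in the low-bit representation.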