2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00348
Low-bit Quantization Needs Good Distribution

Cited by 9 publications (4 citation statements) · References 15 publications
“…Based on this, Rueckauer et al. (2017) and Lu and Sengupta (2020) suggest that a particular percentile from the histogram of the CNN activations may improve conversion efficiency. Going one step further, Yu et al. (2020) suggest that a good distribution with fewer outliers on the CNN activations can be useful for quantisation. Therefore, we call these techniques distribution-aware CNN training techniques, to emphasise that they are mainly focused on optimising CNN training through enforcing good distributions on the activations.…”
Section: Conversion Optimized Through Distribution-Aware CNN Training
confidence: 99%
“…To show that our ECC method is complementary to distribution-aware CNN training techniques, we implement some existing CNN training techniques that affect the activation distribution and show that ECC also works with them to achieve optimized conversion. Specifically, Yu et al. (2020) observe that a clipped ReLU forces small activation values (i.e., those close to zero) to become relatively more frequent at higher magnitudes, eventually reshaping the distribution from a Gaussian-like one towards a uniform one. We follow this observation and train CNN models with different clipping methods, including ReLU6 (clipped at 6) from Lin et al. (2019) and Jacob et al. (2018), ReLU-CM (clipped by k-means) from Yu et al. (2020), and ReLU-SC (shift and clip) from Deng and Gu (2021).…”
Section: Conversion Optimized Through Distribution-Aware CNN Training
confidence: 99%