2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00367
Monte Carlo Gradient Quantization

Cited by 7 publications (1 citation statement)
References 8 publications
“…By adapting to the quantization operation through relearning the weights, QAT yields a model quantized to an ultra-low bit width while maintaining its performance, although it relies on complete training datasets. The main research topics in QAT include quantizer design [26], training strategies [27], gradient approximation [28], and binary networks [29]. In contrast, PTQ realizes model quantization using very limited data.…”
Section: Related Work
Confidence: 99%
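The "relearning around the quantization operation" that the citing paper attributes to QAT is typically implemented as fake quantization: the forward pass uses quantize-dequantized weights, while the backward pass lets gradients flow through unchanged (the straight-through estimator). A minimal sketch of that quantize-dequantize step, using only numpy — the function name and symmetric per-tensor scheme are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Uniformly quantize a weight tensor to num_bits, then dequantize.

    In QAT this replaces the weights on the forward pass, so training
    adapts the full-precision weights to the quantization grid; the
    backward pass treats this op as identity (straight-through estimator).
    This is a generic symmetric per-tensor scheme, shown for illustration.
    """
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    max_abs = float(np.max(np.abs(w)))
    if max_abs == 0.0:                          # all-zero tensor: nothing to scale
        return np.zeros_like(w)
    scale = max_abs / qmax                      # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer grid values
    return q * scale                            # dequantized ("fake") weights

w = np.array([0.51, -0.8, 0.02])
wq = fake_quantize(w, num_bits=4)               # 4-bit grid: coarse rounding
```

At 4 bits the grid step is max|w|/7 ≈ 0.114, so small weights like 0.02 collapse to zero — exactly the error that QAT lets the network compensate for during retraining, and that PTQ must absorb without full-data fine-tuning.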