2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00452

SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks

Abstract: Inference for state-of-the-art deep neural networks is computationally expensive, making them difficult to deploy on constrained hardware environments. An efficient way to reduce this complexity is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited entry codebook. For very low-precisions, such as binary or ternary networks with 1-8-bit activations, the information loss from quantization leads to significant accuracy degradation due to large …
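The quantization idea summarized in the abstract, approximating full-precision weights with a limited-entry codebook plus a scaling factor, can be illustrated with a minimal sketch. The function name, the ternary codebook, and the mean-absolute-value scale heuristic below are assumptions for illustration only, not the SYQ formulation itself.

```python
import numpy as np

def quantize_symmetric(weights, codebook=(-1.0, 0.0, 1.0)):
    """Map each weight to the nearest entry of a small symmetric codebook,
    scaled by a single per-tensor scaling factor.

    Illustrative sketch of the abstract's idea (limited-entry codebook plus
    scaling factor); the scale heuristic is an assumption, not the SYQ
    optimization itself.
    """
    codebook = np.asarray(codebook, dtype=np.float32)
    alpha = np.mean(np.abs(weights))  # simple per-tensor scale heuristic
    # Nearest-codebook-entry assignment for each normalized weight.
    idx = np.argmin(np.abs(weights[..., None] / alpha - codebook), axis=-1)
    return alpha * codebook[idx]

# Example: ternarize a small random weight matrix.
w = np.random.randn(4, 4).astype(np.float32)
print(quantize_symmetric(w))
```

In training, such a quantizer is typically paired with a straight-through estimator so that gradients can flow through the non-differentiable codebook assignment.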

Cited by 114 publications (93 citation statements) | References 16 publications
“…Our approach could also work to improve the results for models quantized with such custom floating point formats. Other approaches use codebooks [7], which put stringent restrictions on the hardware for an efficient implementation. We do not consider codebooks in our approach.…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…The work of Faraone et al. groups parameters during training and gradually quantizes each group with an optimized scaling factor to minimize the quantization error [77].…”
Section: Minimize the Quantization Error (mentioning)
confidence: 99%
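The per-group scaling-factor idea in the statement above can be sketched as follows. For fixed low-precision codes c, the scale that minimizes the squared quantization error ||w - alpha*c||^2 within a group is the least-squares solution alpha = (c·w)/(c·c). The flat, equal-size grouping and this closed-form scale are generic illustrative choices, not necessarily the exact grouping or optimization in the cited work.

```python
import numpy as np

def groupwise_scales(weights, codes, num_groups):
    """One scaling factor per group of weights.

    For fixed low-precision codes c, the scale minimizing ||w - alpha * c||^2
    within a group is alpha = (c . w) / (c . c). The flat, equal-size grouping
    here is an illustrative assumption.
    """
    w_groups = np.array_split(weights.ravel(), num_groups)
    c_groups = np.array_split(codes.ravel(), num_groups)
    scales = []
    for w_g, c_g in zip(w_groups, c_groups):
        denom = float(np.dot(c_g, c_g))
        scales.append(float(np.dot(c_g, w_g)) / denom if denom > 0 else 0.0)
    return np.array(scales)

# Example: binary codes taken as the signs of the weights, 4 groups.
w = np.random.randn(8, 8).astype(np.float32)
print(groupwise_scales(w, np.sign(w), num_groups=4))
```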
“…As shown in Table 4, our method is consistently better than the baseline method and scheme-2 is better than scheme-1 (Figure 6 compares the validation errors of our two schemes based on DoReFa-Net [48] (left) and SYQ [7] (right); the decay function is cosine decay and the decay step is set to 50 epochs).…”
Section: Scheme-1 vs Scheme-2 (mentioning)
confidence: 99%
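A cosine decay schedule like the one referenced above can be sketched as below. The function signature, the min_value floor, and the initial value are assumptions for illustration; the statement only specifies a cosine decay with a 50-epoch decay step.

```python
import math

def cosine_decay(initial_value, epoch, decay_epochs=50, min_value=0.0):
    """Cosine decay from initial_value to min_value over decay_epochs.

    The signature and min_value floor are illustrative assumptions; only the
    cosine shape and 50-epoch decay step come from the cited experiments.
    """
    t = min(epoch, decay_epochs) / decay_epochs
    return min_value + 0.5 * (initial_value - min_value) * (1.0 + math.cos(math.pi * t))

# Example: the decayed value at the start, midpoint, and end of the schedule.
for e in (0, 25, 50):
    print(e, cosine_decay(1.0, e))
```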
“…From Table 5, we can see that the performance gain from our method becomes smaller and smaller as the model size increases. Specifically, our method improves the baseline accuracy of the 0.125× network by 1.31% to 1.96%, while only marginally raising the performance of the 1.0× network (Table 5: validation accuracies (%) for four networks of different sizes with the baseline method (SYQ [7]) and our method on the SVHN dataset). The "W/A" values are the bits for quantizing weights/activations.…”
Section: Model Size (mentioning)
confidence: 99%