2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00357
Least squares binary quantization of neural networks

Cited by 26 publications (21 citation statements). References 23 publications.
“…Quantization schemes. To show SumMerge robustness, we evaluate performance on six quantization schemes (with different n values) across several DNN architectures and datasets [8,12,32,33,39,42], shown in Table 1. For each entry in the table, we used code from each work's public repository to train and quantize each DNN.…”
Section: Methods (citation type: mentioning)
Confidence: 99%
“…While each scheme quantizes to a different set of weights, the primary concern for SumMerge is the effective n value, the resulting weight distribution (Section 3.6), and whether one of the weights is 0 (Section 3.5). For example, TTQ [42] and LS-T [32] quantize to n = 3 weights: {w_n, w_p, 0} and {−α, α, 0}, respectively, where w_p and w_n are positive and negative per-layer weights and α is a per-filter weight (Section 3.5). The difference in values between w_p, w_n, and α does not impact performance, but the fact that each scheme includes the 0-valued weight, and the weight distribution of each layer, does impact performance.…”
Section: Methods (citation type: mentioning)
Confidence: 99%
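A minimal NumPy sketch of the two ternary schemes named in the quote above. This is an illustration, not the authors' code: the function names, the 0.7·mean(|w|) threshold heuristic, and the per-tensor (rather than per-filter or learned per-layer) scale granularity are assumptions made for brevity. It does show the one property the quote highlights: both schemes produce n = 3 weight values including 0.

```python
import numpy as np

def ttq_like_quantize(w, w_p, w_n, delta):
    # TTQ-style ternary set {w_n, w_p, 0}: entries above +delta map to the
    # positive weight w_p, entries below -delta map to the negative weight
    # w_n, everything else to 0. (In TTQ, w_p and w_n are learned per layer;
    # here they are given constants for illustration.)
    q = np.zeros_like(w)
    q[w > delta] = w_p
    q[w < -delta] = w_n
    return q

def ls_ternary_quantize(w):
    # LS-T-style ternary set {-alpha, alpha, 0} with a least-squares scale:
    # for a fixed support mask, the alpha minimizing
    # ||w - alpha * sign(w) * mask||^2 is the mean of |w| over the support.
    delta = 0.7 * np.abs(w).mean()   # threshold heuristic (assumption)
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

# Example: quantize a random conv filter bank.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q = ls_ternary_quantize(w)
assert np.unique(q).size <= 3   # n = 3 values, including 0
```

As the quote notes, what matters downstream for SumMerge is not the particular alpha, w_p, or w_n, but that n = 3, that 0 is one of the values, and how the weights distribute over those values.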
“…As Table 1 shows, ResNet-18 quantized as TNN and TBN have 7.6% and 3.9% higher absolute Top-1 accuracy on ImageNet than binarized ResNet-18, respectively.…”
[Rows spilled from the citing paper's comparison table: [33] Ternary/Binary, 32×, >16×, 62.0% Top-1, 83.6% Top-5; CI-BCNN [44] (BNN) Binary (1-bit), 32×, 64×, 58.1% Top-1, 80.0% Top-5.]
Section: Introduction (citation type: mentioning)
Confidence: 99%