2020 57th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac18072.2020.9218576

Learning to Quantize Deep Neural Networks: A Competitive-Collaborative Approach

Cited by 10 publications (10 citation statements) · References 17 publications
“…We chose an SGD optimizer with an initial learning rate of 0.1 for all datasets. We reduce the learning rate by a factor of 0.1 at [100, 150] epochs for CIFAR-10 and at [30, 60, 90] for both ImageNet datasets. For CIFAR-100, we reduce the learning rate by 0.2 at [60, 120, 160] epochs.…”
Section: Results
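The schedule quoted above maps directly onto a standard milestone-based learning-rate decay. Below is a minimal sketch, assuming a PyTorch-style training loop; the tiny stand-in model, the epoch count, and the loop body are placeholders, not the citing paper's actual code.

    # Minimal sketch of the quoted learning-rate schedule (assumed PyTorch API).
    import torch

    model = torch.nn.Linear(3 * 32 * 32, 10)            # stand-in network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # CIFAR-10:  decay by 0.1 at epochs 100 and 150
    # ImageNet:  decay by 0.1 at epochs 30, 60, 90
    # CIFAR-100: decay by 0.2 at epochs 60, 120, 160
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[100, 150], gamma=0.1)     # CIFAR-10 setting

    for epoch in range(200):
        # ... one full training pass over the data would go here ...
        optimizer.step()     # placeholder for the per-batch updates
        scheduler.step()     # apply the milestone decay once per epoch

Switching datasets only changes the milestones and gamma passed to the scheduler; the rest of the loop stays the same.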
“…However, the benefits of these achievements are limited in resource-constrained systems such as mobile devices, low-power robots, etc. Different model compression algorithms [4, 17, 18, 23, 25, 30, 31] have been proposed to reduce the complexity of such larger models for these systems. Among different compression algorithms, distilling the knowledge from a larger model to a smaller one has been shown to be…”
Section: Introduction
“…In [49], [56], the appropriate bit-precision level is decided manually, which is laborious when accuracy must be maintained. Subsequently, automated algorithms were designed that can discover the appropriate quantization level for each data structure with accuracy in mind [57], [58].…”
Section: Bit-precision Multiply Accumulate
“…CCQ [58] performs alternating stages of competition and collaboration to gradually adapt the weights' wordlength. The competition stage measures the effect on accuracy and memory of quantizing randomly chosen layers to the next bit-precision level.…”
Section: Mixed-precision Quantization
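As a reading aid, here is an illustrative Python sketch of the competition stage described in that excerpt. It is not the authors' implementation: the per-layer bit-width dictionary, the halving step in next_level, and the accuracy/memory trade-off score are assumptions made purely for illustration.

    # Illustrative sketch of a CCQ-style competition stage (assumptions noted above).
    import random

    def next_level(bits):
        # Move a layer to the next lower precision level (hypothetical halving step).
        return max(2, bits // 2)

    def competition_stage(bitwidths, eval_fn, num_candidates=3):
        # Tentatively push a few randomly chosen layers to the next bit-precision
        # level, measure accuracy and a memory proxy, and return the best candidate.
        candidates = random.sample(list(bitwidths), min(num_candidates, len(bitwidths)))
        best = None
        for layer in candidates:
            trial = dict(bitwidths)
            trial[layer] = next_level(trial[layer])
            acc = eval_fn(trial)              # accuracy with the trial assignment
            mem = sum(trial.values())         # crude proxy for model memory
            score = acc - 1e-3 * mem          # hypothetical trade-off criterion
            if best is None or score > best[0]:
                best = (score, layer, trial)
        return best

    # Example with four layers starting at 8 bits and a stand-in accuracy function.
    widths = {"conv1": 8, "conv2": 8, "conv3": 8, "fc": 8}
    print(competition_stage(widths, eval_fn=lambda w: 0.90 - 0.002 * (32 - sum(w.values()))))

The collaboration stage described in the paper would then commit the winning layer's new bit-width and repeat, gradually lowering precision where accuracy permits.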