2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2019.8714901
Self-Supervised Quantization of Pre-Trained Neural Networks for Multiplierless Acceleration

Cited by 23 publications (7 citation statements) · References 14 publications
“…We will elaborate and validate the proposed method in the following sections. In the network quantization area, Vogel et al. [36] presented a non-retraining method for quantizing networks. This may be the closest work to our study.…”
Section: Related Work
confidence: 99%
“…However, most of these techniques require a fine-tuning (retraining) step to reduce the errors induced due to quantization. The authors of [17] have avoided the post-quantization fine-tuning step by computing the quantization step size using an iterative approach. In their proposed technique, the optimal quantization step sizes for features and parameters are computed by iteratively adjusting the step size for each data structure in each layer and recording the generated errors in the layer under consideration.…”
Section: Related Work
confidence: 99%
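The iterative step-size search described in the statement above can be illustrated with a minimal sketch. This is not the authors' actual algorithm (the paper's procedure is not reproduced here); the bit-width, the candidate sweep range, and the mean-squared-error criterion are all assumptions made for illustration:

```python
import numpy as np

def layer_error(w, step, bits=8):
    """MSE introduced by uniform quantization of w at a given step size."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / step), -qmax - 1, qmax) * step
    return np.mean((w - q) ** 2)

def search_step(w, bits=8, iters=50):
    """Iteratively adjust the step size for one layer's data structure,
    recording the error each candidate generates and keeping the best."""
    step = np.max(np.abs(w)) / (2 ** (bits - 1))  # initial guess from the range
    best_step, best_err = step, layer_error(w, step, bits)
    for scale in np.linspace(0.2, 1.2, iters):    # sweep scaled candidates
        cand = step * scale
        err = layer_error(w, cand, bits)
        if err < best_err:
            best_step, best_err = cand, err
    return best_step
```

Run per layer and per data structure (weights, then activations), this yields a step size without any post-quantization fine-tuning, which is the point the excerpt makes.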
“…[17] divides the float32 parameter into two segments, i.e., MSBs and LSBs. It then performs log2 quantization of the MSBs and LSBs separately, and adds the MSB-quantized and LSB-quantized values to obtain the final quantized value.…”
Section: Leading One Location
confidence: 99%
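The two-term log2 scheme quoted above can be sketched as follows: quantize the value to its nearest power of two (the MSB term), then log2-quantize the residual (the LSB term) and sum the two. The function names and the round-to-nearest-exponent choice are assumptions for illustration, not the cited paper's exact formulation:

```python
import numpy as np

def log2_quant(x):
    """Round |x| to the nearest power of two, preserving sign (0 maps to 0)."""
    sign = np.sign(x)
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)           # avoid log2(0)
    out = np.where(mag > 0, 2.0 ** np.round(np.log2(safe)), 0.0)
    return sign * out

def two_term_log2_quant(x):
    """Quantize x as a sum of two power-of-two terms: the first covers
    the most significant part, the second the remaining residual."""
    msb = log2_quant(x)
    lsb = log2_quant(x - msb)
    return msb + lsb
```

A sum of two powers of two is attractive for multiplierless acceleration: each term reduces a multiplication to a bit-shift, and the product becomes two shifts and an add.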
“…A quantization method which specifically targets this problem has been introduced in [94]. Here, parameters and activations are quantized by minimizing the effect of the quantization error δ = χ − χ q in the network.…”
Section: Fixed-point Quantization
confidence: 99%
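Minimizing the effect of the quantization error δ = χ − χ_q can be sketched, under assumptions, as choosing the fixed-point format whose δ has the smallest mean squared magnitude at a fixed word length. The word length, candidate set, and MSE objective here are illustrative choices, not the method in [94]:

```python
import numpy as np

def quantize_fixed(x, frac_bits, word_bits=8):
    """Fixed-point quantization: word_bits total, frac_bits fractional."""
    scale = 2.0 ** frac_bits
    qmax = 2 ** (word_bits - 1) - 1
    return np.clip(np.round(x * scale), -qmax - 1, qmax) / scale

def best_frac_bits(x, candidates=range(0, 9)):
    """Pick the fractional bit-width minimizing the mean squared
    quantization error delta = x - x_q."""
    errs = {b: np.mean((x - quantize_fixed(x, b)) ** 2) for b in candidates}
    return min(errs, key=errs.get)
```

At a fixed word length the choice is a trade-off: more fractional bits shrink rounding error but clip large values, so the minimizer of ‖δ‖ balances the two.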