2020
DOI: 10.1109/access.2020.3005286

ALigN: A Highly Accurate Adaptive Layerwise Log_2_Lead Quantization of Pre-Trained Neural Networks

Abstract: Deep neural networks are a machine learning technique increasingly used in a wide variety of applications. However, their significantly high memory and computation demands often limit their deployment on embedded systems. Many recent works have addressed this problem by proposing different data quantization schemes. However, most of these techniques either require post-quantization retraining of the network or incur a significant loss in output accuracy. I…
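The abstract is truncated in the source. As a rough illustration of the kind of retraining-free quantization it describes, the sketch below rounds a pre-trained model's weights to signed powers of two, layer by layer. This is a generic log2 baseline, not the paper's Log_2_Lead/ALigN scheme (which additionally keeps leading bits and adapts per layer); the dict-of-arrays model layout and function name are illustrative assumptions.

```python
import numpy as np

def log2_quantize(weights: np.ndarray) -> np.ndarray:
    """Generic post-training log2 quantization (no retraining).

    Each nonzero value w becomes sign(w) * 2**round(log2(|w|)),
    i.e. rounding in the log domain. NOT the paper's Log_2_Lead
    scheme, which also retains leading bits to cut rounding error.
    """
    signs = np.sign(weights)
    mags = np.abs(weights)
    nonzero = mags > 0          # avoid log2(0); zeros stay zero
    exps = np.zeros_like(mags)
    exps[nonzero] = np.round(np.log2(mags[nonzero]))
    return np.where(nonzero, signs * np.exp2(exps), 0.0)

# Layerwise application to a pre-trained model
# (the dict layout here is assumed for illustration).
pretrained = {"conv1": np.random.randn(16, 3, 3, 3),
              "fc1": np.random.randn(10, 128)}
quantized = {name: log2_quantize(w) for name, w in pretrained.items()}
```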

Cited by 11 publications (1 citation statement). References 16 publications.
“…For example, for the quantization of pre-trained DNNs, [9], [28]–[30] have proposed different schemes. The techniques presented in [29], [30] focus on logarithmic data representations to avoid computationally expensive multiplication operations. However, some recent works, such as [31]–[33], have utilized fixed-point quantization schemes to employ the well-explored high-performance and energy-efficient approximate adders and multipliers.…”
Section: Arithmetic Hardware for ANN Inference (mentioning)
confidence: 99%
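The point about avoiding multiplications can be made concrete: if weights are stored as signed log2 exponents, a weight-activation product in integer arithmetic reduces to a bit shift. A minimal sketch under that assumption follows; the fixed-point format and function name are illustrative, not taken from [29], [30].

```python
def log_domain_multiply(activation: int, weight_exp: int, weight_sign: int) -> int:
    """Multiply an integer activation by a power-of-two weight.

    With a weight quantized as sign * 2**weight_exp, the product
    needs only a shift and an optional negation, no hardware
    multiplier -- the motivation behind logarithmic representations.
    """
    if weight_exp >= 0:
        product = activation << weight_exp
    else:
        product = activation >> -weight_exp  # right shift truncates
    return -product if weight_sign < 0 else product

# Example: activation 12 times weight -0.25 = -(12 >> 2) = -3
assert log_domain_multiply(12, -2, -1) == -3
```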