2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/aspdac.2018.8297391
Low-power implementation of Mitchell's approximate logarithmic multiplication for convolutional neural networks

Abstract: This paper analyzes the effects of approximate multiplication when performing inferences on deep convolutional neural networks (CNNs). Approximate multiplication can reduce the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators. The study identifies the critical factors in the convolution, fully-connected, and batch normalization layers that allow more accurate CNN predictions despite the errors from approximate multiplication. The same factors als…

Cited by 27 publications (15 citation statements)
References 40 publications
“…One of the solutions is approximate FP arithmetic [11]-[15], which rests on the observation that there is no need to waste energy reaching 100% accuracy when it is not always required [16]. Many applications in the multimedia and ML fields have adopted this concept [16], [17].…”
Section: Introduction
Confidence: 99%
“…In many applications, multiplication is one of the most frequently used but most expensive operations. Therefore, there have been many studies on custom hardware accelerators built around approximate multipliers [11]-[15], [17]-[19]. For FP multiplication, a popular approach is to approximate the mantissa multiplication [11]-[13].…”
Section: Introduction
Confidence: 99%
“…The approximate logarithmic multiplier replaces the significand multiplication with an addition of the input fractions [12]-[17]. It has been shown that approximate logarithmic multipliers can achieve reasonable performance in CNN inference [14]-[17] with pre-trained models. Several existing works focus on unbiased designs of the approximate dynamic-range multiplier and its digital signal processing (DSP) applications [11], [18].…”
Section: Introduction
Confidence: 99%
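The citation statement above describes the core idea of Mitchell's method: approximate log2(1+x) ≈ x for the fractional part x in [0,1), so a multiplication becomes an addition of exponents and fractions, followed by an approximate antilogarithm. A minimal sketch for positive integers is shown below; the function name `mitchell_mul` and the float-based fraction handling are illustrative choices, not the paper's hardware implementation (which would use fixed-point adders and shifters instead).

```python
def mitchell_mul(a: int, b: int) -> int:
    """Mitchell's approximate logarithmic multiplication of two positive ints.

    Uses log2(2^k * (1 + x)) ≈ k + x, so the product's log is approximated
    by adding the two (k + x) terms; the antilog uses the same approximation.
    Always underestimates the true product (or matches it for powers of two).
    """
    def approx_log2(n: int) -> float:
        k = n.bit_length() - 1          # integer part: position of leading 1
        x = (n - (1 << k)) / (1 << k)   # fractional part in [0, 1)
        return k + x                    # Mitchell's log approximation

    s = approx_log2(a) + approx_log2(b)  # add logs instead of multiplying
    k = int(s)                           # integer part of the result's log
    return round((1 << k) * (1 + (s - k)))  # approximate antilog: 2^k * (1+f)
```

For example, `mitchell_mul(12, 13)` yields 144 against the exact product 156, illustrating the characteristic underestimation that the cited CNN studies show inference can tolerate.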
“…Currently, efforts have moved from improving accuracy toward lightening neural networks [13] and accelerating inference times [9], [14], which further degrades system performance when many faces need to be analyzed. Moreover, this decrease in performance is accompanied by an increase in power consumption, which is highly relevant for monitoring applications.…”
Section: Introduction
Confidence: 99%