2021
DOI: 10.1109/tsusc.2020.3004980
Design and Analysis of Energy-Efficient Dynamic Range Approximate Logarithmic Multipliers for Machine Learning

Cited by 43 publications (20 citation statements)
References 35 publications
“…In general, the LEE corresponds to the successive accumulation of logarithms of a squared signal. To do so, a multiplier for obtaining the squared signal is used, while the logarithm is estimated using Mitchell’s algorithm [ 68 ] and logarithm properties. The -Log_2 block initiates the process through the signal STR and the signal that indicates the end of the process is RDY.…”
Section: Experiments and Results
Mentioning confidence: 99%
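The citation above describes estimating a logarithm with Mitchell's algorithm, which approximates log2(x) as the position of the leading one plus the normalized mantissa. A minimal Python sketch of that idea, and of the logarithmic multiplier built on it, follows; the function names and structure here are illustrative assumptions, not the paper's hardware implementation.

```python
def mitchell_log2(x: int) -> float:
    """Mitchell's approximation: for x = 2**k * (1 + f) with 0 <= f < 1,
    estimate log2(x) as k + f (linear interpolation of the mantissa)."""
    assert x > 0
    k = x.bit_length() - 1          # position of the leading one
    f = (x - (1 << k)) / (1 << k)   # normalized fractional part
    return k + f

def mitchell_multiply(a: int, b: int) -> float:
    """Approximate a*b via antilog of the summed Mitchell logarithms."""
    s = mitchell_log2(a) + mitchell_log2(b)
    k = int(s)                      # integer part -> shift amount
    f = s - k                       # fractional part -> mantissa (1 + f)
    return (1 + f) * (1 << k)
```

Because the interpolation always underestimates the true logarithm, the product is also underestimated (by up to roughly 11% in the worst case), which is the accuracy trade-off that dynamic-range variants such as DR-ALM aim to control.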
“…We analyzed and compared the hardware performance of the 16-bit signed fixed-point GEMM and AGEMM units in terms of the power, area, delay, and power delay product (PDP). We compared the GEMM unit using the exact radix-4 multipliers to the AGEMM units employing the logarithmic multipliers DR-ALM5 [ 57 ] and TL16-8/4 [ 61 ], the nonlogarithmic multiplier RAD1024 [ 55 ], and the hybrid HRALM3 multiplier [ 62 ].…”
Section: An Approximate General Matrix Multiply Unit
Mentioning confidence: 99%
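The comparison above ranks multiplier designs by power, area, delay, and power-delay product (PDP). PDP is simply average power times critical-path delay, i.e. energy per operation. A small sketch of that metric, with purely hypothetical numbers rather than the paper's measurements:

```python
def pdp(power_mw: float, delay_ns: float) -> float:
    """Power-delay product: mW * ns = pJ of energy per operation."""
    return power_mw * delay_ns

# Hypothetical illustrative figures (not measured values from the paper):
exact_pdp = pdp(power_mw=2.0, delay_ns=1.5)    # exact radix-4 multiplier
approx_pdp = pdp(power_mw=1.2, delay_ns=1.3)   # approximate logarithmic multiplier
saving = 1 - approx_pdp / exact_pdp            # fractional PDP reduction
```

A lower PDP means less energy per multiplication, which is why approximate logarithmic multipliers can win overall even when their delay alone is comparable to an exact design.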
“…Multiplication is a very common, but expensive operation, with exact multipliers being large circuits that consume a significant amount of energy. Various approximate multipliers have been proposed in recent years [ 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 ]. Many studies reported that approximate multipliers behave well in neural network processing [ 56 , 59 , 60 , 61 , 63 , 64 , 65 ].…”
Section: Introduction
Mentioning confidence: 99%