2021 · DOI: 10.1145/3448980
Improving Power of DSP and CNN Hardware Accelerators Using Approximate Floating-point Multipliers

Abstract: Approximate computing has emerged as a promising design alternative for delivering power-efficient systems and circuits by exploiting the inherent error resiliency of numerous applications. The current article aims to tackle the increased hardware cost of floating-point multiplication units, which prohibits their usage in embedded computing. We introduce AFMU (Approximate Floating-point MUltiplier), an area/power-efficient family of multipliers, which apply two approximation techniques in the resource-hungry m…
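A common approximation in this family of designs is to drop low-order mantissa bits before multiplying, trading a small, bounded relative error for a much cheaper multiplier. The sketch below illustrates the general idea in Python; the `keep_bits` parameter and the truncation scheme are illustrative assumptions, not the paper's exact AFMU algorithm.

```python
import struct

def approx_fp32_mul(a: float, b: float, keep_bits: int = 12) -> float:
    """Multiply two binary32 values after truncating each 23-bit mantissa
    to `keep_bits` bits. A generic truncation-based approximation, not the
    AFMU design from the paper."""
    def truncate(x: float) -> float:
        # Reinterpret the float as its IEEE 754 bit pattern.
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        # Zero out the (23 - keep_bits) lowest mantissa bits.
        mask = ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
        return struct.unpack(">f", struct.pack(">I", bits & mask))[0]
    return truncate(a) * truncate(b)

exact = 3.14159 * 2.71828
approx = approx_fp32_mul(3.14159, 2.71828)
rel_err = abs(approx - exact) / exact  # bounded by roughly 2**-(keep_bits)
```

With 12 mantissa bits kept, each operand carries a relative truncation error below 2^-12, so the product stays within a fraction of a percent of the exact result, which is why error-resilient DSP and CNN workloads tolerate such units.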

Cited by 15 publications (7 citation statements)
References 35 publications

Citation statements:
“…However, the high bit width and large computation volume brought by floating-point calculations still pose a challenge for on-chip deployment. Currently, introducing approximate computing into floating point can significantly enhance energy efficiency [28,29]. Approximate computing can substantially accelerate operations and reduce circuit complexity.…”
Section: Discussion
confidence: 99%
“…In the field of approximate circuit design, approximation methods are applied at the component level, e.g., to adders and multipliers [10][11][12][13][14][15][16][17][18][19], as well as to larger accelerators [20][21][22][23]. Most of these works focus on logic-simplification techniques, i.e., they aim to reduce the circuit complexity of the designs by pruning circuit nodes or using inexact building blocks.…”
Section: Circuit Approximation Techniques
confidence: 99%
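A classic example of such an inexact building block is the lower-part OR adder (LOA), which replaces the carry-propagating logic of the k low-order bits with simple bitwise OR. The Python model below sketches this behavior; the bit widths are illustrative, and the optional carry-estimation term of the full LOA design is omitted for brevity.

```python
def loa_add(a: int, b: int, k: int = 8, width: int = 16) -> int:
    """Lower-part OR Adder (LOA) model: the k low-order bits are
    approximated with bitwise OR (no carry chain); only the upper bits
    use exact addition. Parameters are illustrative assumptions."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)      # approximate low part: OR instead of add
    high = ((a >> k) + (b >> k)) << k  # exact addition on the upper bits
    return (high | low) & ((1 << width) - 1)
```

For example, `loa_add(5, 3)` yields 7 rather than the exact 8, because OR drops the carry out of the overlapping low bits; when the upper bits dominate the magnitude, as in many DSP datapaths, this error is small relative to the result.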
“…At the beginning of the 21st century, FPGAs saw a rapid increase in capacity and design size, the integration of digital signal processing modules, and abundant internal multiply-accumulate units capable of performing the large number of multiplication and addition operations required by convolution [6]. In 2019, Leon et al. [7] and Huang et al. [8] reduced the parameters of the network model by about six times by deploying YOLO networks on an FPGA platform. In recent years, better performance has been pursued to accommodate deep learning algorithms.…”
Section: Introduction
confidence: 99%