Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017
DOI: 10.23919/date.2017.7927038

DVAFS: Trading computational accuracy for energy through dynamic-voltage-accuracy-frequency-scaling

Abstract: Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing tec…

Cited by 40 publications (33 citation statements)
References 24 publications
“…Let us assume the same carry error (C_i = C_err) in a propagating stage (P_i = 1; otherwise there would be no carry-chain perturbation). If another cut-back happens to guess the same faulty carry C_err, it does not disrupt the normal propagation and the previous result (2) holds. But if the carry cut happens in the opposite direction, it overrides (2) and reverses the carry error: the carry, which was false until now, returns to the value of the expected addition.…”
Section: E. Worst-Case Error and Floating-Point Precision, 1) Error Pr…
confidence: 93%
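The carry-cut reasoning quoted above can be illustrated with a toy ripple-carry model. The sketch below (hypothetical names, Python; not code from the cited paper) replaces the true incoming carry at one bit position with a speculative guess, matching the excerpt's observation that a correct guess preserves the exact sum while a wrong guess perturbs the carry chain:

```python
def approx_add(a: int, b: int, cut: int, guess: int = 0, width: int = 8) -> int:
    """Ripple-carry addition with the carry chain cut at bit `cut`:
    the incoming carry at that stage is replaced by `guess`."""
    carry = 0
    result = 0
    for i in range(width):
        if i == cut:
            carry = guess  # speculative carry instead of the true one
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s = ai ^ bi ^ carry                      # sum bit of this stage
        carry = (ai & bi) | (carry & (ai ^ bi))  # carry out of this stage
        result |= s << i
    return result

# Guess matches the true carry at bit 2: the result stays exact.
assert approx_add(5, 2, cut=2, guess=0) == 5 + 2
# Guess contradicts a propagating carry at bit 1: the sum is perturbed.
print(approx_add(7, 1, cut=1, guess=0))  # → 6, not 8
```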
“…Approximate computing has been investigated at different levels of abstraction, such as voltage-frequency-precision scaling at the circuit level [2] or significance-driven computation at the algorithmic level [3]. Another approach consists of redesigning the architecture of digital circuits into an approximate version with smaller delay, silicon area or power consumption.…”
Section: Introduction
confidence: 99%
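The precision-scaling side of the DVAFS trade-off can be sketched in software, even though the voltage and frequency savings happen in hardware. The sketch below (hypothetical helper names; an illustration, not the paper's implementation) reduces each operand to its most-significant bits before multiplying, which is the accuracy knob that a DVAFS-style circuit would couple to voltage and frequency:

```python
def truncate(x: int, width: int, active: int) -> int:
    """Keep only the `active` most-significant bits of a `width`-bit
    value, zeroing the dropped LSBs (precision reduction)."""
    drop = width - active
    return (x >> drop) << drop

def approx_mul(a: int, b: int, width: int = 8, active: int = 4) -> int:
    """Multiply with both operands reduced to `active` bits of precision."""
    return truncate(a, width, active) * truncate(b, width, active)

exact = 200 * 150                                    # 30000
approx = approx_mul(200, 150, width=8, active=4)     # 192 * 144 = 27648
print(abs(exact - approx) / exact)                   # relative error < 8%
```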
“…One of the most popular approaches to obtaining more energy-efficient inference for neural networks is through custom hardware accelerators, targeting field-programmable gate arrays (FPGAs) [15,19,39] or application-specific integrated circuits (ASICs) [3,6,28,40]. These are custom-built architectures that optimize the most energy-intensive operations involved in the inference process (typically multiply-and-accumulate loops).…”
Section: Custom Hardware Designs
confidence: 99%
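The multiply-and-accumulate loop mentioned above is the kernel such accelerators implement in hardware. A minimal software model of the quantized integer MAC common in inference accelerators (hypothetical names; the int8 operand range and power-of-two rescale are illustrative assumptions):

```python
def mac_int8(weights, activations, shift=7):
    """Quantized multiply-accumulate: int8 operands, a wide (int32-style)
    accumulator, and a final right-shift rescale back to activation range."""
    acc = 0
    for w, x in zip(weights, activations):
        assert -128 <= w < 128 and -128 <= x < 128, "int8 operands only"
        acc += w * x        # products accumulate in a wide register
    return acc >> shift     # rescale the accumulated sum

print(mac_int8([64, -32], [100, 50]))  # (6400 - 1600) >> 7 = 37
```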
“…Different types of dynamic inference optimizations have been proposed in the literature. Some researchers have developed "big/little" systems in which two NNs of different sizes and complexities are used depending on the input complexity [5,8,11,28]. Conditional [24] and hierarchical/staged [25][26][27] inference are other effective forms of dynamic optimization.…”
Section: Introduction
confidence: 99%
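A big/little system of the kind described can be sketched as a confidence-gated cascade: the cheap network runs on every input, and the expensive one only when the cheap one is unsure. The sketch below uses hypothetical function names, and the 0.9 threshold is an illustrative assumption:

```python
def dynamic_infer(x, little, big, threshold=0.9):
    """Big/little inference: run the small model first and fall back to
    the large model only when the small model's confidence is too low."""
    probs = little(x)
    if max(probs) >= threshold:
        return probs.index(max(probs))  # confident: stop early, save energy
    probs = big(x)                      # uncertain: pay for the big model
    return probs.index(max(probs))

# Toy stand-ins for the two networks:
little = lambda x: [0.95, 0.05] if x == 0 else [0.50, 0.50]
big = lambda x: [0.10, 0.90]
print(dynamic_infer(0, little, big))  # little model is confident → 0
print(dynamic_infer(1, little, big))  # falls back to big model → 1
```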
“…In order to obtain different levels of approximation, two main approaches are followed. The first approach includes voltage over-scaling methods such as [15], where over-scaling can severely degrade quality through its impact on the MSBs. The second approach proposes transistor-level or gate-level design and implementation methods.…”
Section: Background, A. Approximate Computing Circuits
confidence: 99%
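The MSB sensitivity noted in the excerpt follows from positional weighting: a single-bit error at position i changes the output by 2^i, and timing errors from voltage over-scaling tend to land on the longest carry paths, which terminate at the MSBs. A minimal demonstration (hypothetical helper name):

```python
def bit_flip_error(value: int, bit: int) -> int:
    """Magnitude of the output error when one bit flips: always 2**bit."""
    return abs((value ^ (1 << bit)) - value)

print(bit_flip_error(100, 0))  # LSB flip: error of 1, negligible
print(bit_flip_error(100, 7))  # MSB flip: error of 128, dominates 8-bit range
```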