Proceedings of the 35th International Conference on Computer-Aided Design 2016
DOI: 10.1145/2966986.2967021
Design of power-efficient approximate multipliers for approximate artificial neural networks

Cited by 130 publications (97 citation statements)
References 14 publications
“…The chip area is estimated as the sum of the sizes of the gates of the circuit, which are given as one of the inputs of ADAC. The chip area is typically a good estimate of the power consumption [3,14,20,22]. The output of ADAC (in the gate-level Verilog format) can be passed to industrial circuit design tools to obtain accurate circuit parameters for the target technology.…”
Section: Architecture and Implementationmentioning
confidence: 99%
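The excerpt above describes estimating chip area as the sum of the sizes of the circuit's gates, and using that area as a proxy for power consumption. A minimal sketch of that idea follows; the gate names and relative size values are illustrative assumptions, not taken from ADAC or any particular cell library.

```python
# Sketch of area estimation as described above: the area of a
# gate-level netlist is approximated as the sum of its gates' sizes,
# and that total serves as a rough proxy for power consumption.
# Gate types and relative sizes below are assumed for illustration.

GATE_SIZES = {  # relative area units per gate type (hypothetical values)
    "INV": 1.0,
    "NAND2": 1.5,
    "NOR2": 1.5,
    "AND2": 2.0,
    "XOR2": 3.0,
}

def estimate_area(netlist):
    """Sum the sizes of all gate instances in a netlist.

    `netlist` is a list of gate-type strings, one per instance.
    """
    return sum(GATE_SIZES[g] for g in netlist)

# A toy 1-bit full adder: 2 XORs, 2 ANDs, and an OR built as NOR2 + INV.
full_adder = ["XOR2", "XOR2", "AND2", "AND2", "NOR2", "INV"]
print(estimate_area(full_adder))  # 12.5
```

For real designs this total would be replaced by per-cell areas from the target technology library, which is exactly why the gate-level Verilog output is handed to industrial design tools for accurate figures.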
“…Besides the approaches mentioned above, there also exist general-purpose methods, such as SALSA [14] or SASIMI [15], which approximate circuits independently of their structure. We were unable to perform a direct comparison with them because their implementations are not available, but based on the published results, ADAC provides significantly better scalability.…”
Section: Evaluation Related Work and Applicationsmentioning
confidence: 99%
“…In DNNs, approximations were introduced at the levels of data-type quantization, microarchitecture (e.g. neurons contributing insignificantly to the quality of outputs can be removed), the training algorithm (an iterative process which can be stopped when good enough results are obtained), the multiply-accumulate-transform circuits (where the design of approximate multipliers and adders for DNN applications represents an independent topic [15], [16]), and memory cells and architecture (where, e.g., less significant bits can be stored in energy-efficient but less reliable memory cells [17]). An ultra-low-power deep learning ASIC for IoT was implemented on a single chip, capable of performing 374 GOPS/W and consuming less than 300 µW.…”
Section: Approximate Circuits For Image and Video Processingmentioning
confidence: 99%
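The first approximation level listed in the excerpt above is data-type quantization. A small illustrative sketch, not taken from the cited paper: float weights are mapped to signed 8-bit integers with a uniform scale, trading precision for cheaper arithmetic and storage.

```python
# Illustrative sketch of uniform data-type quantization, one of the
# DNN approximation levels listed above (assumed scheme, not the
# cited paper's method): floats -> signed 8-bit ints with one scale.

def quantize(weights, bits=8):
    """Uniformly quantize floats to signed `bits`-bit integers.

    Returns (quantized values, scale) such that w ~= q * scale.
    """
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight lies within half a quantization step of the original.
assert all(abs(w - a) <= scale / 2 for w, a in zip(weights, approx))
```

The same uniform-scale idea underlies 8-bit inference in practice, though production schemes typically add a zero-point offset and per-channel scales.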