2015
DOI: 10.1063/1.4912881
Error analysis in the hardware neural networks applications using reduced floating-point numbers representation

Cited by 4 publications (3 citation statements) | References 0 publications
“…Therefore, we can use approximate computing units with reduced power consumption to replace the traditional standard computing units adopted in DNNs. In our previous work [10], [11], [13], we proposed three digital approximate multiplication unit architectures to reduce DNN computing power consumption. The approximate multiplication units can be dynamically reconfigured and adapted to different accuracy requirements.…”
Section: B. Approximate Computing for DNNs (mentioning)
Confidence: 99%
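The excerpt above does not specify the internals of the cited reconfigurable multipliers; one common way to realize a cheaper, accuracy-tunable multiplication is to truncate low-order operand bits before multiplying, with the truncation width (`drop_bits` here, a hypothetical parameter name) acting as the reconfiguration knob. A minimal sketch of that idea, not the cited architectures:

```python
def truncated_mul(a: int, b: int, drop_bits: int) -> int:
    """Approximate fixed-point multiply: discard the low `drop_bits`
    bits of each operand before multiplying. A hardware multiplier
    then only needs to process the surviving high-order bits, trading
    a bounded accuracy loss for lower power."""
    a_t = (a >> drop_bits) << drop_bits   # zero out low bits of a
    b_t = (b >> drop_bits) << drop_bits   # zero out low bits of b
    return a_t * b_t

exact = 1234 * 5678
approx = truncated_mul(1234, 5678, 4)   # operands rounded down to multiples of 16
rel_err = (exact - approx) / exact       # well under 1% for this pair
```

Raising `drop_bits` at run time is what "dynamically reconfigured" could mean in such a design: the same datapath, with more or fewer active bits depending on the accuracy the application requires.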
“…Therefore, for the BWN, the hardware needs to process more calculations, which in turn causes extra power consumption. In summary, for low-power speech recognition systems, there are three advantages to quantizing the data and weight bit widths of DNNs: first, it effectively reduces the memory size and the data/weight access power consumption [14]; second, the reduced data/weight bit width also reduces the hardware resources and power consumption of the computing units [15]; third, for voltage-domain analog computing circuits, analog noise mismatch is also reduced. For example, 6-bit data can be encoded within 64 (2^6) voltage values, while 16-bit data requires 65536 (2^16) voltage values for encoding.…”
Section: Preliminaries, A. Network Optimization Approaches for Low Powe... (mentioning)
Confidence: 99%
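The 2^6-versus-2^16 encoding example above follows directly from uniform quantization: an n-bit value can only land on one of 2^n grid points. A small sketch (hypothetical helper, symmetric [-1, 1] range assumed) showing how 6-bit weights snap onto a 64-level grid:

```python
def quantize(x: float, bits: int, lo: float = -1.0, hi: float = 1.0) -> float:
    """Uniformly quantize x onto 2**bits levels spanning [lo, hi];
    6-bit data admits 64 representable values, 16-bit admits 65536."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)       # spacing between adjacent levels
    x = min(max(x, lo), hi)               # clip to the representable range
    return lo + round((x - lo) / step) * step

weights = [0.337, -0.512, 0.9991]
w6 = [quantize(v, 6) for v in weights]    # snapped onto the 64-level grid
```

In a voltage-domain analog datapath, each of those grid points corresponds to one encoded voltage value, which is why fewer bits directly means fewer voltage levels to distinguish and less sensitivity to analog mismatch.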
“…RNNs have been proven to be naturally fault tolerant, and the calculation accuracy requirements of different application scenarios also vary widely [11,12]. Thus, an Energy-Efficient Reconfigurable Architecture (E-ERA) is proposed, comprising reconfigurable approximate computing arrays with low energy cost and high processing performance, and a self-adaptive approximate computing approach that monitors and dynamically adjusts the precision of computing.…”
Section: Architectures of E-ERA for RNNs (mentioning)
Confidence: 99%
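The excerpt does not detail E-ERA's monitoring mechanism; as an illustration only, a self-adaptive precision controller can be reduced to a threshold rule that widens the datapath when the monitored error is too high and narrows it when accuracy has headroom. All names and threshold values below are hypothetical:

```python
def adapt_precision(error: float, bits: int,
                    hi_thresh: float = 0.05, lo_thresh: float = 0.01,
                    min_bits: int = 4, max_bits: int = 16) -> int:
    """One control step of a self-adaptive precision loop (sketch,
    not the E-ERA implementation): grow the bit width when monitored
    error exceeds hi_thresh, shrink it when error is comfortably
    below lo_thresh, otherwise hold steady."""
    if error > hi_thresh and bits < max_bits:
        return bits + 1       # too inaccurate: spend more bits
    if error < lo_thresh and bits > min_bits:
        return bits - 1       # accuracy headroom: save energy
    return bits
```

Called once per monitoring interval, a rule like this exploits exactly the RNN fault tolerance the excerpt cites: the architecture runs at the narrowest bit width the current workload's accuracy requirement allows.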