2019
DOI: 10.1016/j.mejo.2019.03.011

ARA: Cross-Layer approximate computing framework based reconfigurable architecture for CNNs

Cited by 16 publications (10 citation statements)
References 12 publications
“…For DNN processing, the power consumption of multiplications is much higher than that of other operations. Therefore, in our previous work [15], we also tried to replace most multiplication operations with addition operations in the convolution layers. This approach can significantly reduce the energy consumption of multiplication operations in convolution layers for image recognition applications with low accuracy requirements.…”
Section: B. Approximate Computing for DNNs
confidence: 99%
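The excerpt above describes trading multiplications for additions in convolution layers. A minimal sketch of one common way to do this, assuming weights are rounded to signed powers of two so each product becomes a bit shift (an addition in the exponent domain); the exact scheme of [15] may differ:

```python
# Hypothetical illustration, not the exact method of [15]: round each weight to
# the nearest signed power of two so that x * w reduces to a shift of x
# (an exponent addition) plus sign handling; no general multiplier is needed.
import numpy as np

def quantize_pow2(w, eps=1e-12):
    """Return sign and integer exponent such that w ~= sign * 2**exp."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + eps)).astype(int)
    return sign, exp

def conv2d_shift_add(x, w):
    """3x3 'multiplier-free' valid convolution: products emulated as shifts."""
    sign, exp = quantize_pow2(w)
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            acc = 0.0
            for u in range(3):
                for v in range(3):
                    # np.ldexp(x, e) == x * 2**e; a barrel shift in fixed point.
                    acc += sign[u, v] * np.ldexp(x[i + u, j + v], exp[u, v])
            out[i, j] = acc
    return out
```

A zero weight quantizes to a tiny magnitude but is masked by its zero sign, so it contributes nothing to the accumulation.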
“…Power is simulated for a convolution of a 256 × 256 image with a 3 × 3 filter. Results show that the proposed memory-based multiplier attains 18.4% and 29.4% reductions in area and power, respectively, compared with the state-of-the-art memory-based multiplier [4].…”
Section: Memory Array
confidence: 99%
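For context on that benchmark, the arithmetic workload implied by the quoted setup is small enough to count directly, assuming a 'valid', stride-1 convolution (the excerpt does not state the padding):

```python
# Operation count for a 3x3 filter slid over a 256x256 image, assuming
# 'valid' convolution with stride 1 (padding is not specified in the excerpt).
H = W = 256
K = 3
out_h, out_w = H - K + 1, W - K + 1   # 254 x 254 outputs
macs = out_h * out_w * K * K          # multiply-accumulate operations
print(out_h, out_w, macs)             # 254 254 580644
```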
“…[24] proposed a uniform affine quantization that generates a quantized convolution kernel expressed as 8-bit unsigned integers over a dynamic range. [41] proposed dynamic, layer-wise application of different convolution implementations and quantization schemes to CNNs to reduce computational complexity, including quantization of the Winograd convolution. [42] proposed applying Winograd convolution to an 8-bit network and using learning to recover the resulting accuracy loss.…”
Section: Low Precision and Quantization
confidence: 99%
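As a reference point for the scheme attributed to [24], a minimal sketch of uniform affine quantization to 8-bit unsigned integers follows; the calibration and rounding details of the cited work may differ:

```python
# Minimal sketch of uniform affine (asymmetric) quantization to uint8.
import numpy as np

def affine_quantize_u8(x):
    """Map a float tensor onto uint8 via a scale and zero-point."""
    x_min = min(float(x.min()), 0.0)          # keep 0.0 exactly representable
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0    # dynamic range -> step size
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def affine_dequantize(q, scale, zero_point):
    """Approximate reconstruction of the original float values."""
    return (q.astype(np.float32) - float(zero_point)) * scale
```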
“…[87], [88] used high-efficiency Winograd convolution on IoT devices to achieve high performance. [41], [89], [90] used stochastic and approximate computing in their implementations. [91] is implemented on ReRAM and improves tile-based data reuse, and [48] … Many frameworks integrate Winograd convolution to improve model execution efficiency.…”
Section: CPU
confidence: 99%
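The Winograd convolution mentioned throughout these excerpts reduces the number of multiplications per output tile. A minimal sketch of the standard 1-D F(2,3) building block; the 2-D F(2×2, 3×3) transform used by the cited frameworks nests these same matrices:

```python
# 1-D Winograd minimal filtering F(2,3): two outputs of a 3-tap correlation
# using 4 multiplications instead of 6 (standard transform matrices).
import numpy as np

BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    return AT @ ((G @ g) * (BT @ d))   # elementwise product = 4 multiplies

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))              # [-0.5  0. ], matches direct correlation
```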