2019 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio49542.2019.8961780

Deep Learning Training with Simulated Approximate Multipliers

Abstract: This paper presents, through simulation, how approximate multipliers can be utilized to enhance the training performance of convolutional neural networks (CNNs). Approximate multipliers have significantly better performance in terms of speed, power, and area compared to exact multipliers. However, approximate multipliers introduce an inaccuracy, which is quantified in terms of the Mean Relative Error (MRE). To assess the applicability of approximate multipliers in enhancing CNN training performance, a simulation of the impact…
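As a concrete reference for the MRE metric named in the abstract, the sketch below estimates it empirically for an arbitrary multiplier. This is a minimal illustration, not the paper's simulation framework: `approx_mul` is a hypothetical stand-in for whatever approximate design is under test, and the 8-bit unsigned operand range is an assumption.

```python
import numpy as np

def mean_relative_error(approx_mul, bits=8, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of the Mean Relative Error (MRE):
    mean(|approx(a, b) - a*b| / |a*b|) over random operand pairs."""
    rng = np.random.default_rng(seed)
    # Nonzero unsigned operands so the relative error is well defined.
    a = rng.integers(1, 2**bits, size=n_samples, dtype=np.int64)
    b = rng.integers(1, 2**bits, size=n_samples, dtype=np.int64)
    exact = a * b
    approx = approx_mul(a, b)
    return float(np.mean(np.abs(approx - exact) / exact))
```

As a sanity check, an exact multiplier scores zero: `mean_relative_error(np.multiply)` returns 0.0.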

Cited by 8 publications (5 citation statements)
References 13 publications
“…Such circuit-level techniques can be integrated into designs at the other two levels. Approximate multipliers have already been applied successfully in error-tolerant tasks such as filtering, image processing, and machine learning [14, 15, 37–39, 59–72]. In 1985, Ashtaputre et al designed a systolic array with approximate multipliers and demonstrated that inaccurate multiplication has a negligible impact on the results [59].…”
Section: Introduction
confidence: 99%
“…Hammad et al replaced the original full-precision multipliers in VGG networks with approximate ones to support classification tasks on CIFAR-10 and CIFAR-100, which again showed negligible accuracy losses [68]. They subsequently proposed deploying approximate multipliers during training to improve performance [70].…”
Section: Introduction
confidence: 99%
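One common way to simulate an approximate multiplier inside a CNN layer (not necessarily the setup used in the cited works) is to quantize operands to n bits and replace each elementwise product with a precomputed lookup of the approximate product, while keeping the accumulation exact. A minimal sketch under those assumptions, where `approx_mul` is again a hypothetical elementwise design:

```python
import numpy as np

def build_product_lut(approx_mul, bits=8):
    """Table of approximate products for every pair of n-bit operands,
    so the simulated multiplier reduces to a vectorized gather."""
    ops = np.arange(2**bits, dtype=np.int64)
    return approx_mul(ops[:, None], ops[None, :])  # shape (2^n, 2^n)

def approx_dense_forward(x_q, w_q, lut):
    """Dense-layer forward pass with quantized inputs x_q (batch, in) and
    weights w_q (in, out): each multiply goes through the LUT, while the
    sum stays exact (the usual approximate-MAC simulation assumption)."""
    products = lut[x_q[:, :, None], w_q[None, :, :]]  # (batch, in, out)
    return products.sum(axis=1)                       # (batch, out)
```

For 8-bit operands the table is only 256×256 entries, so the gather is cheap; the same idea extends to convolutions by applying the LUT to each input-patch/kernel product before summation.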
“…The utilization of approximate multipliers in the hardware design of convolutional neural networks (CNNs) has been proposed previously to enhance performance in terms of power, speed, and area [23–28]. Moreover, a reconfigurable approximate multiplier based on calculating the error variance was proposed in [29].…”
Section: Introduction
confidence: 99%
“…Moreover, a reconfigurable approximate multiplier based on calculating the error variance was proposed in [29]. Lower-precision approximate multipliers can achieve higher performance gains, as can be seen in [23–27]. However, this performance enhancement comes at the cost of a drop in CNN accuracy that is inversely proportional to the precision.…”
Section: Introduction
confidence: 99%
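The precision-versus-error tradeoff described above is easy to reproduce with a simple stand-in design, for example a truncating multiplier that zeroes the k low-order bits of each operand before an exact (but narrower, hence cheaper) multiply. This is a classic low-precision approximation used here purely for illustration, not the specific designs of [23–29]: larger k means a cheaper circuit but a higher MRE.

```python
import numpy as np

def truncating_mul(k):
    """Approximate multiplier that zeroes the k least-significant bits of
    each operand before a narrower, cheaper exact multiply."""
    def mul(a, b):
        return ((a >> k) << k) * ((b >> k) << k)
    return mul

# Empirical MRE for increasing truncation: error grows as precision drops.
rng = np.random.default_rng(0)
a = rng.integers(1, 256, size=100_000, dtype=np.int64)
b = rng.integers(1, 256, size=100_000, dtype=np.int64)
for k in (1, 2, 4, 6):
    approx = truncating_mul(k)(a, b)
    mre = np.mean(np.abs(approx - a * b) / (a * b))
    print(f"k={k}: MRE ~ {mre:.3%}")
```

Sweeping k this way mirrors the reported pattern: performance gains scale with how much precision is discarded, while accuracy degrades in proportion.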
“…Introduction: Approximate computing can be applied in error-resilient applications to reduce power, area, and delay [1–10]. Multiplication is a fundamental, high-energy operation in image processing and deep learning applications [11–14]. Prior works have explored different techniques to reduce the cost of multiplication using approximate multipliers.…”
confidence: 99%