2017
DOI: 10.1109/tetc.2017.2673548

A Power-Aware Digital Multilayer Perceptron Accelerator with On-Chip Training Based on Approximate Computing

Cited by 42 publications (20 citation statements)
References 21 publications

“…Our HW accelerator can be adopted to train other Multi-Layer Perceptron (MLP) neural networks that are employed in real-time applications on embedded devices. Some prior works [14], [23], [24] exist in the literature to accelerate the training of MLP or fully-connected neural networks. Approximate computing is adopted in [23] via inexact multipliers and bit-precision reduction to reduce power consumption.…”
Section: ReLU and Dropout Storage
confidence: 99%
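
The bit-precision reduction mentioned in this excerpt can be pictured as rounding the synaptic weights to a coarse fixed-point grid before each multiply-accumulate. Below is a minimal software sketch of that idea; the function name quantize_fixed_point and the chosen bit widths are assumptions for illustration, not details of the cited accelerator.

```python
import numpy as np

def quantize_fixed_point(w, frac_bits=8):
    # Round to a signed fixed-point grid with `frac_bits` fractional bits.
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(16, 8))   # one MLP weight matrix (illustrative size)
x = rng.normal(size=(1, 16))              # one input row vector

exact = x @ w
approx = x @ quantize_fixed_point(w, frac_bits=6)
print("max abs output error at 6 fractional bits:", np.abs(exact - approx).max())
```
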
“…Some prior works [14], [23], [24] exist in the literature to accelerate the training of MLP or fully-connected neural networks. Approximate computing is adopted in [23] via inexact multipliers and bit-precision reduction to reduce power consumption. The synapses that have the least impact on the final error are identified during the training phase and approximated by the inexact multipliers.…”
Section: ReLU and Dropout Storage
confidence: 99%
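
One way to picture the selective approximation this excerpt describes is to route only the low-impact synapses through a cheaper, truncation-based multiply while keeping exact products elsewhere. The sketch below is a software stand-in under that assumption; truncated_multiply, the random impact scores, and the 50% threshold are illustrative choices, not the circuit or the sensitivity metric used in [23].

```python
import numpy as np

def truncated_multiply(a, b, drop_bits=6, frac_bits=12):
    # Fixed-point multiply with the lowest `drop_bits` bits of each operand zeroed,
    # mimicking an inexact multiplier that ignores low-order partial products.
    scale = 2 ** frac_bits
    ai = (np.round(a * scale).astype(np.int64) >> drop_bits) << drop_bits
    bi = (np.round(b * scale).astype(np.int64) >> drop_bits) << drop_bits
    return (ai * bi) / (scale * scale)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=(8, 4))           # synaptic weights of one layer
x = rng.normal(size=8)                           # one input vector
impact = np.abs(rng.normal(size=w.shape))        # stand-in for per-synapse sensitivity
low_impact = impact < np.quantile(impact, 0.5)   # approximate the least sensitive half

products = np.where(low_impact, truncated_multiply(x[:, None], w), x[:, None] * w)
print("approximate outputs:", products.sum(axis=0))
print("exact outputs:      ", (x[:, None] * w).sum(axis=0))
```
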
“…al. [11] to identify the weights that have a higher gradient with respect to the loss and select them for reduced precision or higher compression. We extend their approach to consider shielding those high-gradient weights from process variations and errors.…”
Section: Comparison With Baselines
confidence: 99%
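
The gradient-based selection attributed to [11] can be illustrated by ranking weights by the magnitude of the loss gradient and reserving full precision for the most important fraction. The sketch below assumes a single linear layer with a squared-error loss; the 10% protection budget and the coarse quantization step are arbitrary illustration values, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=(16, 4))          # layer weights
x = rng.normal(size=(32, 16))         # a small batch of inputs
y = rng.normal(size=(32, 4))          # matching targets

err = x @ w - y                       # residual of the linear layer
grad = x.T @ err / len(x)             # dL/dw for L = mean 0.5 * ||x @ w - y||^2
importance = np.abs(grad)             # gradient magnitude as the importance score

k = int(0.1 * w.size)                 # protect the top 10% highest-gradient weights
protect = np.zeros(w.size, dtype=bool)
protect[np.argsort(importance.ravel())[-k:]] = True
protect = protect.reshape(w.shape)

# Protected weights keep full precision; the rest get coarse rounding to a 1/16 grid.
w_mixed = np.where(protect, w, np.round(w * 16) / 16)
print("protected fraction:", protect.mean())
```
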
“…As modern DNN models are heavily over-parameterized, "protecting" a small fraction of the important parameters can significantly improve robustness against process variation. Such unequal protection of DNN weights has been explored for the quantization and compression of weights [11,12]. These methods primarily used heuristics based on gradients to determine which weights must be protected.…”
Section: Introduction
confidence: 99%
“…Hence, more effort should be dedicated to reducing the hardware complexity and power consumption of the DSP system to cope with embedded-system constraints. This goal can be pursued with specific design methods such as approximate-computing techniques [36,37,38]. Exploiting inexact arithmetic circuits for the SVD implementation would improve system efficiency by decreasing power consumption and hardware resources.…”
Section: Classification Study Based on FPGA Implementation
confidence: 99%