2021 22nd International Symposium on Quality Electronic Design (ISQED)
DOI: 10.1109/isqed51717.2021.9424345
Exploring Fault-Energy Trade-offs in Approximate DNN Hardware Accelerators

Abstract: Systolic array-based deep neural network (DNN) accelerators have recently gained prominence for their low computational cost. However, their high energy consumption poses a bottleneck to their deployment in energy-constrained devices. To address this problem, approximate computing can be employed at the cost of some tolerable accuracy loss. However, such small accuracy variations may increase the sensitivity of DNNs towards undesired subtle disturbances, such as permanent faults. The impact of permanent faults…

Cited by 14 publications (9 citation statements)
References 20 publications
“…The FPGA implementation of this method achieves lower power consumption than its GPU counterparts. Siddique et al. [9] designed energy-efficient EvoApprox8b signed approximate multipliers and performed an extensive bit-wise and layer-wise fault-resilience analysis. The results show that energy efficiency and fault resilience are orthogonal.…”
Section: Related Work
confidence: 99%
“…Recent efforts to this end include software solutions such as model replication [9] and error prediction coding [7], and hardware solutions such as approximation [12] and redundant mapping [20]. For FPGA-based neuromorphic designs, fault tolerance can also be addressed using periodic scrubbing [11,19].…”
Section: Introduction
confidence: 99%
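The replication-based hardening mentioned in the statement above can be illustrated with a bitwise triple-modular-redundancy (TMR) majority vote over three redundant copies of a value; this is a generic sketch of the replication idea, not the specific scheme of [9] or [20]:

```python
import numpy as np

def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies of a value.

    A fault that corrupts any single copy is out-voted bit-by-bit
    by the two fault-free copies.
    """
    return (a & b) | (a & c) | (b & c)

# A stuck-at-1 fault sets bit 2 in one replica only; the vote recovers
# the golden value because the other two replicas agree.
golden = np.array([0b0101, 0b0011], dtype=np.uint8)
faulty = golden | np.uint8(0b0100)          # one corrupted replica
recovered = tmr_vote(golden, golden, faulty)
```

The trade-off, of course, is the roughly 3x area and energy overhead of keeping three copies, which is why the surveyed works also explore cheaper alternatives such as error-prediction coding and periodic scrubbing.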
“…As discussed in this paper, permanent faults affect the compute units of AxDNN accelerators in every execution cycle, and their presence as unmasked faults leads to serious failures in the whole system. Indeed, their impact is stronger in AxDNN accelerators than in their accurate design alternatives, i.e., accurate deep neural network (AccDNN) accelerators, because of self-error-inducing approximate computations [15]. However, the ratio of fault resilience in AxDNNs to that of AccDNNs depends on the data precision, the location and type of the permanent faults, the size of the accelerator, the degree of approximation, the activation functions, and the neural network topology.…”
mentioning
confidence: 99%
“…Condia et al. studied the effects of faults in critical and user-hidden modules (such as the Warp Scheduler and the Pipeline Registers) during the convolution computations of AccDNNs on GPUs [23]. Very recently, our previous work in [15] explored the fault resilience of AxDNNs, together with their energy trade-offs, running on a 256x256 approximate systolic array-based DNN accelerator analogous to the Google TPU. However, the analysis presented in [15] has several limitations: (i) it is limited to simple approximate multi-layer perceptrons, (ii) only layer-wise permanent faults on TPU-based AxDNNs are analyzed, and a non-layer-wise analysis is missing, (iii) only a systolic-array-based architecture is analyzed, and the impact of permanent faults on GPU-based accelerators is ignored, and (iv) the analysis does not provide any insights into the mitigation of the permanent faults.…”
mentioning
confidence: 99%
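The layer-wise permanent-fault injection discussed in the statement above can be sketched as a stuck-at fault model applied to quantized weights. The function name, the int8 quantization, and the fault parameters below are illustrative assumptions, not the exact experimental setup of [15]:

```python
import numpy as np

def inject_stuck_at(weights_q, bit, stuck_to, fault_fraction, seed=0):
    """Inject permanent stuck-at faults at one bit position of int8 weights.

    A randomly chosen fraction of weights gets the given bit forced to
    stuck_to (0 or 1), modeling a permanent fault in the storage or
    datapath of one layer. Illustrative model, not the setup of [15].
    """
    rng = np.random.default_rng(seed)
    w = weights_q.astype(np.uint8).copy()        # two's-complement bit view
    faulty = rng.random(w.shape) < fault_fraction
    if stuck_to == 1:
        w[faulty] |= np.uint8(1 << bit)          # force bit to 1
    else:
        w[faulty] &= np.uint8(~(1 << bit) & 0xFF)  # force bit to 0
    return w.astype(np.int8)

# Stuck-at-1 on the sign bit (bit 7) of every weight: zeros become -128,
# illustrating why high-order-bit faults dominate the accuracy drop.
layer_w = np.zeros((4, 4), dtype=np.int8)
corrupted = inject_stuck_at(layer_w, bit=7, stuck_to=1, fault_fraction=1.0)
```

Repeating such an injection layer by layer, and comparing the resulting classification accuracy against the fault-free baseline, is the kind of layer-wise resilience sweep the quoted passage describes.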