2021 IEEE 32nd International Conference on Microelectronics (MIEL)
DOI: 10.1109/miel52794.2021.9569094

Fault Resilience Analysis of Quantized Deep Neural Networks

Cited by 10 publications (5 citation statements)
References 5 publications
“…For each BER, the fault injection simulations were performed at least 1000 times, and each simulation ran for 24,550 clock cycles, the number of cycles required to complete one classification task. Table 3 shows the corresponding error probabilities, including the total number of serious errors and system crashes, and the energy efficiency, which is defined by Equation (10).…”
Section: Comparisons of Design Overhead and Fault Tolerance
confidence: 99%
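The injection procedure quoted above can be sketched in a few lines. A minimal NumPy illustration, assuming FP32 weights and an independent per-bit flip probability equal to the BER; the function name inject_faults and the flip model are assumptions, not details from the cited paper.

import numpy as np

def inject_faults(weights, ber, rng=None):
    """Flip each bit of a float32 weight array independently with probability ber."""
    rng = rng or np.random.default_rng()
    bits = weights.astype(np.float32).view(np.uint32)  # astype copies, so the original is untouched
    for b in range(32):
        # Bernoulli mask selecting which weights get bit position b flipped.
        mask = rng.random(bits.shape) < ber
        bits[mask] ^= np.uint32(1 << b)
    return bits.view(np.float32)

# As in the quoted setup, such an injection would be repeated (at least 1000
# runs per BER) and the outcome of each classification task recorded.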
“…Refs. [10,16] utilized Quant, SMART, TMR, and SMART+TMR, respectively, to harden the MLP, reducing the error probability by 39.96%, 43.84%, 53.97%, and 55.52%.…”
Section: Comparisons of Design Overhead and Fault Tolerance
confidence: 99%
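The TMR scheme mentioned above masks single faults by majority voting over three replicas. A hedged sketch of the general idea in NumPy, not the hardened MLP design of Refs. [10,16]:

import numpy as np

def tmr_vote(a, b, c):
    """Bitwise 2-out-of-3 majority vote over three float32 replica outputs."""
    ai, bi, ci = (x.view(np.uint32) for x in (a, b, c))
    voted = (ai & bi) | (ai & ci) | (bi & ci)  # each bit follows the majority
    return voted.view(np.float32)

# A bit upset in any single replica is outvoted by the two fault-free copies,
# which is why TMR yields the largest error-probability reductions above.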
“…For instance, consider a weight parameter of a DNN model represented as a 32-bit floating-point (FP32) number. A fault in the most significant exponent bit of the FP32 number can substantially change the parameter's value and dramatically decrease the accuracy [36]. The majority of studies have considered fault injection only in the weights of the neural network.…”
Section: Fault Injection at RTL Level
confidence: 99%
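The quoted FP32 failure mode is easy to reproduce. A minimal sketch, assuming the IEEE-754 single-precision layout, in which bit 30 is the most significant exponent bit:

import numpy as np

def flip_bit(x, bit):
    """Flip one bit of a float32 value and return the corrupted number."""
    word = np.float32(x).view(np.uint32)
    return float((word ^ np.uint32(1 << bit)).view(np.float32))

print(flip_bit(0.05, 30))  # exponent MSB flip: 0.05 becomes roughly 1.7e37
print(flip_bit(0.05, 10))  # low mantissa bit: deviation of only about 4e-6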
“…If not masked, such a fault could propagate through the DNN and drastically decrease the accuracy. A fault likewise causes a deviation in fixed-point (FxP) numbers, which reduces accuracy, but the overall impact is smaller because of the narrower dynamic range of FxP numbers [39]. It is therefore crucial to choose a data type and bit-width that satisfy the requirements on accuracy, reliability, and hardware resources.…”
Section: Factors Affecting the Resiliency of Deep Neural Networks
confidence: 99%
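The narrower-dynamic-range argument can be made concrete by bounding the worst-case single-bit deviation of a fixed-point value. A sketch assuming a signed 8-bit Q1.7 format, chosen for illustration rather than taken from [39]:

def fxp_flip(x, bit, frac_bits=7):
    """Flip one bit of an 8-bit two's-complement Q1.7 encoding of x."""
    q = int(round(x * (1 << frac_bits))) & 0xFF  # quantize, keep the low 8 bits
    q ^= 1 << bit                                # inject the bit flip
    if q >= 128:                                 # reinterpret as signed
        q -= 256
    return q / (1 << frac_bits)                  # dequantize

print(fxp_flip(0.05, 7))  # sign-bit flip: 0.046875 -> -0.953125, deviation exactly 1.0
# versus a deviation of ~1.7e37 for the FP32 exponent-MSB flip shown earlier.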