2019
DOI: 10.3103/s1060992x19020103
Study of Fault Tolerance Methods for Hardware Implementations of Convolutional Neural Networks

Cited by 7 publications (3 citation statements). References 2 publications.
“…a) Fault-aware training: In [213] and [91], it is demonstrated for ANNs and SNNs, respectively, that training with dropout improves error resilience. Dropout was originally proposed in [276] to prevent over-fitting and reduce the generalization error on unseen data.…”
Section: Model-based. Citation type: mentioning.
Confidence: 99%
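The dropout mechanism referenced in this statement is standard; a minimal sketch (pure NumPy, inverted-dropout convention; the function name and shapes are illustrative, not taken from the cited works):

```python
import numpy as np

def dropout(activations, p_drop, rng):
    # Inverted dropout: zero each unit with probability p_drop and
    # rescale the survivors by 1/(1 - p_drop), so the expected
    # activation is unchanged and no rescaling is needed at inference.
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
out = dropout(np.ones(1000), 0.5, rng)
```

In fault-aware training, the relevant effect is that dropout prevents the network from relying on any single unit, which is also what helps when a hardware fault zeroes or corrupts units at inference time.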
“…This feature is available in all major DNN frameworks. Some authors [9] have shown that dropout during training is effective at improving the fault tolerance of a DNN during inference; however, that study was done on an abstract network using a small dataset.…”
Section: Output Stationary Architecture for Fault-Tolerant Training. Citation type: mentioning.
Confidence: 99%
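Fault-tolerance studies of this kind typically evaluate inference robustness by injecting faults into stored weights. A minimal sketch of a single-bit-flip fault model (NumPy; the helper name and this particular fault model are assumptions for illustration, not the cited paper's exact setup):

```python
import numpy as np

def inject_bitflip(weights, index, bit):
    # Simulate a hardware memory fault: flip one bit in the IEEE-754
    # representation of a single float32 weight.
    faulty = weights.copy()
    raw = faulty.view(np.uint32)        # reinterpret the bits, no value conversion
    raw[index] ^= np.uint32(1 << bit)   # XOR toggles the chosen bit
    return faulty

w = np.array([1.0, 2.0], dtype=np.float32)
f = inject_bitflip(w, 0, 31)            # bit 31 is the sign bit of a float32
```

Sweeping `index` and `bit` over a trained model's weights and measuring the accuracy drop per flip is the usual way to compare the inherent redundancy of networks such as those discussed below.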
“…One modern and compact network, SqueezeNet [10], has been selected as our primary test case, as it is representative of the networks used in embedded applications. Since many previous fault-tolerance studies [7], [9], [11] have used VGG-16 [12] and LeNet-5, we have also included these two networks. VGG-16 is a large network with a huge number of weights, so it inherently has more redundancy.…”
Section: A. Selected DNNs. Citation type: mentioning.
Confidence: 99%