Soft errors in DNN accelerators: A comprehensive review (2020)
DOI: 10.1016/j.microrel.2020.113969

Cited by 61 publications (23 citation statements)
References 51 publications
“…Interestingly, we found out the same results with our methodology as shown in next sections. In [27,28], the authors evaluate the reliability of one CNN executed on three different GPU architectures (Kepler, Maxwell, and Pascal).…”
Section: Related Work
confidence: 99%
“…On the other hand, there are some reports (for example references [ 137 , 138 , 139 ]) suggesting that some internal redundancy of computations embedded in AI models will suppress the negative effects of soft errors to some extent. Despite this, it is difficult to predict the levels of such error suppression with any degree of reliability.…”
Section: Results
confidence: 99%
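The excerpt above concerns how single-bit soft errors perturb a model's computations. As a minimal illustrative sketch (not the cited papers' actual fault-injection framework), the function below flips one bit of a float32 weight's binary representation, the standard way single-event upsets are emulated in reliability studies of DNN accelerators; the bit positions chosen in the demo are assumptions for illustration:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Emulate a soft error by flipping one bit of a float32 value."""
    # Reinterpret the float's 32-bit pattern as an unsigned integer.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit  # toggle the chosen bit (0 = LSB of mantissa)
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

# A low mantissa bit barely perturbs the weight, while the top
# exponent bit (bit 30) changes its magnitude enormously -- one
# intuition for why some flips are masked and others are critical.
w = 0.5
small = flip_bit(w, 0)    # tiny perturbation
large = flip_bit(w, 30)   # huge magnitude change
```

This asymmetry between mantissa and exponent flips is one reason the error-suppression levels mentioned in the excerpt are hard to predict: the outcome depends heavily on which bit of which value is struck.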
“…YOLOv3 was selected as the method for deep learning using SqueezeNet as a feature extractor. The rectified linear unit (ReLU) function used in fire modules, kept the original SqueezeNet activation settings [19], [29]. The fully connected (FC) layers will follow the leaky ReLU function.…”
Section: Methods 3.1 Proposed Approach in Deep Action Learning
confidence: 99%
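The excerpt above contrasts the ReLU used in SqueezeNet's fire modules with the leaky ReLU used in the fully connected layers. A minimal sketch of the two activations follows; the leak slope of 0.01 is a common default and an assumption here, since the quoted paper does not state its value:

```python
def relu(x: float) -> float:
    """Standard rectified linear unit: zeroes out negative inputs."""
    return max(0.0, x)

def leaky_relu(x: float, slope: float = 0.01) -> float:
    """Leaky ReLU: passes a small scaled signal for negative inputs
    instead of zeroing them, avoiding fully dead units."""
    return x if x >= 0.0 else slope * x
```

The design difference is that leaky ReLU keeps a nonzero gradient for negative inputs, which is why it is sometimes preferred in the dense layers that follow a feature extractor.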