2022 IEEE 40th VLSI Test Symposium (VTS)
DOI: 10.1109/vts52500.2021.9794239

Special Session: Fault-Tolerant Deep Learning: A Hierarchical Perspective

Abstract: The complexity of learning problems, such as Generative Adversarial Networks (GANs) and their variants, multi-task and meta-learning, hyper-parameter learning, and a variety of real-world vision applications, demands a deeper understanding of their underlying coupling mechanisms. Existing approaches often address these problems in isolation, lacking a unified perspective that can reveal commonalities and enable effective solutions. Therefore, in this work, we propose a new framework, named Learning with Constrain…

Cited by 8 publications (4 citation statements). References 127 publications.

Citation statements (ordered by relevance):

“…It covered resilience threats related to transient, permanent, and TEs induced by process variation, memory refresh rate scaling, voltage scaling, and thermal effects. The study in [11] presents a survey on fault-tolerant deep learning with a hierarchical approach, studying the proposed methods from the model-layer, architecture-layer, circuit-layer, and cross-layer views. Moreover, [12] reviews the reliability issues of DNN accelerators considering soft errors.…”
Section: A. Papers With Similar Background
Citation type: Mentioning (confidence: 99%)

“…Some of the reliability evaluations explored the differences in reliability among the components of neural networks, so that these differences can be exploited to perform selective protection with lower protection overhead [43]-[45]. More neural network reliability evaluation works can be found in recent surveys [46], [47].…”
Section: B. Reliability Analysis Of Neural Network
Citation type: Mentioning (confidence: 99%)

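The selective-protection idea in the statement above can be made concrete with a short sketch. This is our illustration, not code from [43]-[45]; the layer names, vulnerability scores, and protection costs are made-up placeholders. Given per-layer vulnerability estimates (e.g., the accuracy drop observed when faults are injected into that layer alone), the layers offering the most vulnerability reduction per unit of protection cost are protected first, until a hardware-overhead budget is exhausted.

# Hedged sketch of selective protection; the per-layer numbers are
# illustrative placeholders, not measured data from the cited works.
# Each entry: (layer name, estimated vulnerability, protection cost).
layers = [
    ("conv1", 0.12, 0.8),
    ("conv2", 0.05, 1.5),
    ("fc1",   0.30, 2.0),
    ("fc2",   0.02, 0.4),
]

def select_layers_to_protect(layers, budget):
    """Greedily protect layers with the best vulnerability-per-cost ratio."""
    ranked = sorted(layers, key=lambda layer: layer[1] / layer[2], reverse=True)
    protected, spent = [], 0.0
    for name, vulnerability, cost in ranked:
        if spent + cost <= budget:
            protected.append(name)
            spent += cost
    return protected, spent

protected, spent = select_layers_to_protect(layers, budget=2.5)
print("protect:", protected, "overhead used:", spent)

A knapsack or ILP formulation would be the more principled choice when the budget is tight; the greedy pass merely keeps the sketch short.
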
“…Fault simulation is key to understanding the influence of hardware faults on neural network processing, and it is the basis for fault-tolerant neural network model and accelerator designs [18], [12], [19], [4], [10] in various application scenarios. For instance, the fault simulations in [36], [33] are used to investigate the vulnerability of neural networks and accelerators, which enables selective hardware protection against various hardware faults with minimum overhead.…”
Section: Related Work
Citation type: Mentioning (confidence: 99%)

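As a minimal illustration of what such a fault simulation does at the lowest level, the helper below flips randomly chosen bits of a float32 weight tensor at a given bit-error rate. It is a sketch under the memory bit-flip fault model commonly assumed in soft-error studies, not the tooling used in [36] or [33].

# Minimal weight-memory fault injector (sketch; assumes the random
# bit-flip model, with NumPy as the only dependency).
import numpy as np

def inject_bit_flips(weights, ber, rng=None):
    """Return a copy of `weights` with each stored bit flipped with probability `ber`."""
    rng = rng if rng is not None else np.random.default_rng()
    faulty = np.ascontiguousarray(weights, dtype=np.float32).copy()
    bits = faulty.reshape(-1).view(np.uint32)     # raw 32-bit words of the weights
    n_bits = bits.size * 32
    n_flips = rng.binomial(n_bits, ber)           # number of upsets drawn for this BER
    idx = rng.integers(0, n_bits, size=n_flips)   # bit positions to corrupt
    np.bitwise_xor.at(bits, idx // 32, (1 << (idx % 32)).astype(np.uint32))
    return faulty

w = np.random.default_rng(0).standard_normal((256, 128)).astype(np.float32)
w_faulty = inject_bit_flips(w, ber=1e-5)
print("weights corrupted at", np.count_nonzero(w_faulty != w), "positions")

Running the corrupted weights through the network and comparing the outputs with a fault-free run is the basic loop behind the vulnerability studies cited here.
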
“…The reliability of neural network accelerators, which are increasingly utilized for their competitive advantages in terms of performance and energy efficiency [30], [29], becomes critical to these applications and must be evaluated. With the continuously shrinking semiconductor feature sizes and growing transistor density, the influence of soft errors on large-scale chip designs becomes inevitable [5], [31]. A variety of analysis works have recently been conducted to investigate the influence of soft errors on neural network execution reliability from distinct angles [27], [14], [34], [35], [36], [32], [13], [20], [6], [12], [18]. For instance, Brandon Reagen et al. [27] investigated the relationship between fault error rate and model accuracy from the perspective of models, layers, and structures.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)

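A toy version of such a fault-error-rate versus model-accuracy study is sketched below. It uses a synthetic one-layer classifier and random data purely for illustration; it is not the experimental setup of [27]. The sweep injects bit flips into the weights at increasing bit-error rates and reports how often the faulty predictions still agree with the fault-free ones, a simple proxy for accuracy degradation.

# Toy sweep of bit-error rate vs. prediction agreement (synthetic model and
# data; only meant to illustrate the kind of analysis described above).
import numpy as np

rng = np.random.default_rng(0)

def flip_bits(w, ber):
    """Same bit-flip injector as sketched earlier, repeated so this snippet runs standalone."""
    faulty = w.copy()
    bits = faulty.reshape(-1).view(np.uint32)
    n_flips = rng.binomial(bits.size * 32, ber)
    idx = rng.integers(0, bits.size * 32, size=n_flips)
    np.bitwise_xor.at(bits, idx // 32, (1 << (idx % 32)).astype(np.uint32))
    return faulty

# Toy classifier: a single linear layer followed by argmax over 10 classes.
x = rng.standard_normal((1000, 64)).astype(np.float32)
w = rng.standard_normal((64, 10)).astype(np.float32)
clean_pred = (x @ w).argmax(axis=1)

# Higher bit-error rates should lower the agreement with the fault-free run.
for ber in (1e-8, 1e-6, 1e-4, 1e-2):
    agreement = [((x @ flip_bits(w, ber)).argmax(axis=1) == clean_pred).mean()
                 for _ in range(5)]
    print(f"BER={ber:.0e}  mean agreement={np.mean(agreement):.3f}")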