2021
DOI: 10.1109/tns.2021.3062014
Impact of Single-Event Upsets on Convolutional Neural Networks in Xilinx Zynq FPGAs

Cited by 28 publications (15 citation statements)
References 23 publications
“…Closer to this work, several works have proposed methods to apply full or partial TMR to neural network accelerators. For example, Wang et al. [6] implemented full TMR on a custom lightweight CNN topology. Their approach significantly improved the error rate (a 33.59% error rate reduction), although it incurs a large increase in hardware resources.…”
Section: Related Work
confidence: 99%
“…In this work, we propose fault-tolerant NN accelerators leveraging parallelism, using FPGAs for acceleration (as suggested in [3]), targeting convolutional layers (as in [10]), targeting reduced hardware overhead (as suggested in [4,6]), and applying partial TMR, as in [7][8][9], but with a finer-grained approach than Selective Hardening or SHIELDeNN, as we analyse and triplicate individual channels within a layer instead of the entire layer. This work could be considered an extension of Gambardella et al. [2], developing the ideas proposed in that work into a tool which generates reliable accelerators.…”
Section: Related Work
confidence: 99%
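The partial, channel-level TMR described in the statement above rests on majority voting across triplicated copies of a computation. A minimal sketch of that voting step (illustrative only, not code from the cited works; `tmr_channel` and `channel_fn` are hypothetical names), assuming channel outputs are integer arrays:

```python
import numpy as np

def majority_vote(a, b, c):
    # Bitwise majority of three redundant results: a bit is set in the
    # output iff it is set in at least two of the three copies, so a
    # single-event upset in any one copy is masked.
    return (a & b) | (a & c) | (b & c)

def tmr_channel(channel_fn, x):
    # Triplicate one channel's computation and vote on the results;
    # selective hardening applies this only to channels deemed critical.
    r1, r2, r3 = channel_fn(x), channel_fn(x), channel_fn(x)
    return majority_vote(r1, r2, r3)

golden = np.array([0b1010, 0b0110], dtype=np.uint8)
faulty = golden.copy()
faulty[0] ^= 0b0100  # simulate a single bit flip in one replica
voted = majority_vote(golden, faulty, golden)
print(np.array_equal(voted, golden))  # True: the upset is masked
```

Triplicating individual channels rather than whole layers trades per-channel voter logic for a much smaller replication footprint, which is the resource-overhead argument made in the quoted passage.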
“…Except for our pioneering works ([14], [19]), this is the only work employing resource-constrained devices in the experiments. The remaining works consider FPGA implementations of ML algorithms [5], [12], [6], [15], [17], [7] or their execution on generic graphics processing units (GPUs) [11] and ML-specialised accelerators [13], [16], [18]. All works, with the exception of [5], adopted neutron irradiation for their experiments.…”
Section: Related Work In Machine Learning Soft Error Assessment and M...
confidence: 99%
“…Datasets are not always available, and the retraining cost may not be affordable for complex accelerators or for every business case [4]. More traditional hardening approaches (e.g., triple modular redundancy, TMR) have also been either adapted for DNN solutions implemented in FPGAs [5], [6], [7] or applied to DNN models running in specialised accelerators [8], [9]. However, the lower resource availability of edge-computing devices makes traditional redundancy mitigation approaches unsuitable for tackling the occurrence of radiation-induced soft errors, given their likely impact on the system's performance and response time.…”
Section: Introduction
confidence: 99%
“…Secondly, it involves optimizing the design of CNN algorithms. In [17,18], weight quantization is shown to reduce the sensitivity of CNNs to SEUs. In [19], three modern deep CNNs were tested for their robustness.…”
Section: Introduction
confidence: 99%
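As a rough illustration of why quantization can reduce SEU sensitivity (a hypothetical sketch, not the method of [17,18]): with symmetric int8 quantization, a flip in a low-order weight bit perturbs the stored weight by exactly one quantization step, whereas a flipped exponent bit in a float32 weight can change its magnitude by many orders of magnitude.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map floats onto [-127, 127].
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)

# Flip the least significant bit of every quantized weight: each value
# moves by exactly one, so the dequantized error is one scale step.
flipped = (q ^ np.int8(1)).astype(np.int8)
err = np.abs(flipped.astype(np.float32) - q.astype(np.float32)) * scale
print(err)  # each entry equals `scale`
```

The worst-case perturbation of an int8 weight is bounded by the representable range, which is one intuition behind the quantization-based sensitivity reduction reported in the quoted passage.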