2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad.2017.8203770
Fault injection attack on deep neural network

Citation types: 0 supporting, 169 mentioning, 0 contrasting
Cited by 166 publications (169 citation statements). References 17 publications.
“…The authors explore both active and passive attackers, i.e., attackers that can actively input their own images to the accelerator and attackers that can only observe user inputs. Another line of attacks attempts to induce faults in order to cause misclassifications [52], [53] and relies on microarchitectural or device-level attacks, such as RowHammer [54]. Probing attacks: Probing attacks assume that an attacker is able to access the individual components of the device, e.g., the CPU/GPU/ASIC, the RAM memory, non-volatile storage, or buses, but is not able to perform invasive attacks that access the internals of the chips.…”
Section: Attacks on Deployed Neural Networks
Mentioning, confidence: 99%
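The RowHammer-style fault attacks cited above are effective because a single bit flip in a stored parameter can change its value by many orders of magnitude. A minimal Python sketch of this effect on an IEEE-754 float32 weight (the helper name and chosen bit position are illustrative, not taken from the cited works):

    import struct

    def flip_bit(value, bit):
        # Reinterpret a float32 as its 32-bit pattern, flip one bit,
        # and reinterpret the result as a float32 again.
        (bits,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
        return flipped

    # Flipping the most significant exponent bit (bit 30) of a small
    # weight inflates it enough to dominate a layer's output.
    print(flip_bit(0.125, 30))  # 0.125 -> about 4.25e+37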
“…Past work has introduced several ways to inject faults into DNNs themselves to compromise their functionality. In [5], attackers fool DNNs into making mistakes by modifying their parameters through fault injection: the single bias attack modifies one parameter with a large perturbation to cause misclassification, while the gradient descent attack achieves stealthiness by adding small perturbations to a number of parameters. Reverse-engineering attacks [6] can identify model parameters in the off-chip memory, which may be stealthily replaced by attackers.…”
Section: B. Related Work and Motivation
Mentioning, confidence: 99%
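As a rough illustration of the single bias attack idea described in this statement, the sketch below perturbs one output bias of a toy linear classifier; the function name, model, and perturbation size are assumptions for illustration, not the implementation from [5]:

    import numpy as np

    def single_bias_attack(b, target_class, delta=1e3):
        # One-parameter attack: a large perturbation on a single output
        # bias makes the target class's logit dominate for every input.
        b_adv = b.copy()
        b_adv[target_class] += delta
        return b_adv

    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(10, 784)), np.zeros(10)      # toy 10-class model
    x = rng.normal(size=784)
    print(np.argmax(W @ x + b))                          # honest prediction
    print(np.argmax(W @ x + single_bias_attack(b, 3)))   # forced to class 3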
“…In this section, we evaluate the performance of the proposed validation scheme by considering its detection rate under malicious and random parameter perturbations. The malicious perturbations are generated according to the attacks proposed in [5], and the random perturbations add Gaussian noise. We apply each kind of parameter perturbation 10,000 times against the MNIST and CIFAR-10 models, and then calculate the detection rate by observing whether the perturbations change the DNN outputs of the generated functional tests.…”
Section: Perturbation Detection Rate
Mentioning, confidence: 99%
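A minimal sketch of how such a detection-rate experiment could be structured, assuming a generic model function, a set of functional-test inputs, and a perturbation generator (all names here are stand-ins; the cited scheme's functional-test generation is not reproduced):

    import numpy as np

    def detection_rate(model_fn, params, tests, perturb, trials=10000):
        # A perturbation counts as detected if it changes the model's
        # output on at least one of the functional-test inputs.
        baseline = [int(np.argmax(model_fn(params, t))) for t in tests]
        detected = sum(
            [int(np.argmax(model_fn(perturb(params), t))) for t in tests]
            != baseline
            for _ in range(trials)
        )
        return detected / trials

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 784))                    # toy linear model
    tests = [rng.normal(size=784) for _ in range(8)]  # stand-in functional tests
    gaussian = lambda w: w + rng.normal(scale=0.5, size=w.shape)
    print(detection_rate(lambda w, x: w @ x, W, tests, gaussian, trials=1000))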