Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3243734.3278519

Practical Fault Attack on Deep Neural Networks

Abstract: As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern and could potentially lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), using a laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functi…

Cited by 106 publications (107 citation statements)
References 14 publications
“…Progressive bit search is the very first bit-searching attack algorithm developed to malfunction a quantized neural network through perturbation of stored model parameters using the rowhammer attack. We already showed in the previous section that the earlier attack algorithms [12,20] on floating-point model parameters are not efficient: they do not consider that attacking a floating-point DNN model is as easy as flipping the most significant exponent bits of random weights.…”
Section: Comparison To Other Methods
Mentioning confidence: 95%
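The cited claim — that flipping the most significant exponent bit of a random float32 weight is enough to wreck a model — can be illustrated with a short sketch. The function name and example weight below are ours for illustration, not from the cited work:

```python
import struct

def flip_msb_exponent(value: float) -> float:
    """Flip the most significant exponent bit (bit 30) of an IEEE-754
    single-precision float and return the resulting value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << 30  # bit 30 is the top bit of the 8-bit exponent field
    (faulted,) = struct.unpack("<f", struct.pack("<I", bits))
    return faulted

# A typical small DNN weight explodes into an astronomically large value:
w = 0.5
print(flip_msb_exponent(w))  # → 1.7014118346046923e+38
```

A weight of 0.5 has exponent field 126 (0b01111110); flipping bit 30 yields 254, i.e. the value becomes 2^127, which saturates any downstream dot product and makes the network's output meaningless.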
“…In [7] the authors present a practical laser fault attack that creates faults in Deep Neural Networks (DNNs) running on a low-cost microcontroller.…”
Section: Hardware-Based Approach
Mentioning confidence: 99%
“…Therefore, it is essential to minimize the number of modified parameters by our fault sneaking attack. Recently, [17] implemented the DNN fault injection attack [16] physically on embedded systems using a laser beam. In particular, [17] injects faults into the widely used activation functions in DNNs and demonstrates the possibility of achieving misclassifications by injecting faults into the DNN hidden layer.…”
Section: Practical Fault Injection Techniques
Mentioning confidence: 99%
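The effect described in this citation — faulting a hidden-layer activation function to force misclassifications — can be simulated in software. The toy network, weights, and stuck-at-zero fault model below are illustrative assumptions of ours, not the setup used in [17]:

```python
import random

random.seed(1)

# Toy one-hidden-layer classifier; weights are illustrative, not from [17].
HIDDEN, INPUTS, CLASSES = 8, 4, 3
W1 = [[random.uniform(-1, 1) for _ in range(INPUTS)] for _ in range(HIDDEN)]
W2 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(CLASSES)]

def relu(v):
    return [max(0.0, x) for x in v]

def faulted_relu(v):
    # Stuck-at-zero fault model: as if an injected fault made the
    # activation function return zero regardless of its input.
    return [0.0 for _ in v]

def forward(x, act):
    hidden = act([sum(w * xi for w, xi in zip(row, x)) for row in W1])
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    return logits.index(max(logits))  # predicted class index

x = [0.3, -0.7, 1.2, 0.5]
print("clean prediction:  ", forward(x, relu))
print("faulted prediction:", forward(x, faulted_relu))
```

Under this fault model every logit collapses to zero, so the faulted network always predicts class 0 — a misclassification for any input whose true class differs, which mirrors the hidden-layer attack outcome the citation describes.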