2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00175

Robust Physical-World Attacks on Deep Learning Visual Classification

Abstract: Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack …
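The abstract's "small-magnitude perturbations" can be made concrete with a minimal gradient-sign (FGSM-style) sketch. This is an illustration only, not the robust physical-world attack (RP2) proposed in the paper; the toy PyTorch classifier, input shape, and budget eps below are assumptions introduced for the example.

```python
# Minimal FGSM-style sketch of a small-magnitude adversarial perturbation.
# Illustrative only; NOT the paper's RP2 attack. The toy untrained classifier
# stands in for a real model so the script is self-contained.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

x = torch.rand(1, 3, 32, 32)   # stand-in for a clean image in [0, 1]
y = torch.tensor([0])          # assumed true label
eps = 8 / 255                  # small L-infinity perturbation budget

x_req = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_req), y)
loss.backward()

# One gradient-sign step: a barely visible perturbation that increases the loss.
x_adv = (x + eps * x_req.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())
```

With a trained classifier, the same few lines often suffice to change the predicted class while keeping every pixel change below eps.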

Cited by 1,755 publications (1,248 citation statements)
References 36 publications
“…For instance, regarding a supervised ML classification model, adversarial attacks try to discover the minimum changes that should be applied to the input data in order to cause a different classification. This has happened regarding computer vision systems of autonomous vehicles; a minimal change in a stop signal, imperceptible to the human eye, led vehicles to detect it as a 45 mph signal [359]. For the particular case of DL models, available solutions such as Cleverhans [360] seek to detect adversarial vulnerabilities, and provide different approaches to harden the model against them.…”
Section: Explanations for AI Security: XAI and Adversarial Machine Learning (mentioning)
confidence: 99%
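As a rough illustration of the "minimum changes" idea in the excerpt above, the sketch below bisects over an L-infinity budget along the gradient-sign direction until the predicted class flips. The toy PyTorch classifier and the single fixed search direction are assumptions made here; toolkits such as CleverHans, cited in the excerpt, implement far stronger attacks.

```python
# Hedged sketch: approximate the smallest L-infinity change (along one fixed
# gradient-sign direction) that alters a classifier's prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

x = torch.rand(1, 3, 32, 32)
y_clean = model(x).argmax(dim=1)

# Gradient-sign direction that increases the loss on the current prediction.
x_req = x.clone().requires_grad_(True)
nn.functional.cross_entropy(model(x_req), y_clean).backward()
direction = x_req.grad.sign()

def flips(eps: float) -> bool:
    """True if a perturbation of size eps changes the predicted class."""
    x_adv = (x + eps * direction).clamp(0.0, 1.0)
    return model(x_adv).argmax(dim=1).item() != y_clean.item()

lo, hi = 0.0, 1.0
if flips(hi):
    for _ in range(20):  # bisect towards the smallest flipping budget
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if flips(mid) else (mid, hi)
    print(f"approximate minimal L-inf change: {hi:.4f}")
else:
    print("no prediction flip within budget along this direction")
```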
“…Adversarial attacks can have serious implications in many security-related applications as well as in the physical world. For example, in 2018, Eykholt et al. showed that placing small stickers on traffic signs (here: stop signs) can induce a misclassification rate of 100% in lab settings and 85% in a field test in which video frames were captured from a moving vehicle [172].…”
Section: Adversarial Learning (mentioning)
confidence: 99%
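The sticker attack described in the excerpt confines the perturbation to small regions of the sign. A simplified sketch of that masking idea follows; it is not the authors' RP2 optimization (which additionally averages over varied distances, angles, and lighting), and the toy model, mask location, and target class are assumptions introduced here.

```python
# Hedged sketch: optimize a perturbation restricted to a small "sticker" mask.
# Simplified illustration of the masked-perturbation idea, not the RP2 method.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

x = torch.rand(1, 3, 32, 32)    # stand-in for a stop-sign image
y_target = torch.tensor([5])    # assumed target class (e.g. a speed-limit sign)

mask = torch.zeros_like(x)
mask[:, :, 10:16, 10:22] = 1.0  # sticker region: a small rectangle of pixels

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(200):
    x_adv = (x + mask * delta).clamp(0.0, 1.0)  # perturb only inside the mask
    loss = nn.functional.cross_entropy(model(x_adv), y_target)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_adv = (x + mask * delta).clamp(0.0, 1.0).detach()
print("prediction with sticker:", model(x_adv).argmax(dim=1).item())
```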
“…So the linear term has zero expectation, and the quadratic term depends directly on the variance of the noise and the trace of the Hessian. As a convex relaxation, if we assume $f_0$ is convex, then, using $d \cdot A_{\max} \ge \operatorname{Tr}(A) \ge A_{\max}$ for $A \in \mathbb{S}^{d \times d}_{+}$, we can rewrite (9) as…”
Section: Mathematical Explanations (mentioning)
confidence: 99%
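The excerpt's claim, that the linear term has zero expectation while the quadratic term scales with the noise variance and the trace of the Hessian, follows from a second-order Taylor expansion under zero-mean isotropic Gaussian noise. The notation below ($f$, $x$, $\epsilon$, $H$, $\sigma^2$) is assumed for illustration and need not match the cited paper's equation (9).

\[
\mathbb{E}_{\epsilon}\!\left[f(x+\epsilon)\right]
\approx f(x) + \nabla f(x)^{\top}\,\mathbb{E}[\epsilon]
+ \tfrac{1}{2}\,\mathbb{E}\!\left[\epsilon^{\top} H \epsilon\right]
= f(x) + \tfrac{\sigma^{2}}{2}\,\operatorname{Tr}(H),
\qquad \epsilon \sim \mathcal{N}(0,\sigma^{2} I),
\]

since $\mathbb{E}[\epsilon] = 0$ kills the linear term and $\mathbb{E}[\epsilon^{\top} H \epsilon] = \sigma^{2}\operatorname{Tr}(H)$, with $H = \nabla^{2} f(x)$ the Hessian.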