2018
DOI: 10.48550/arxiv.1807.07769
Preprint
Physical Adversarial Examples for Object Detectors

Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, creating perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a …
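The abstract refers to gradient-based adversarial perturbations. A minimal sketch of the core idea, the Fast Gradient Sign Method (FGSM), on a toy logistic-regression classifier rather than an object detector (the model, weights, and values here are illustrative assumptions, not the paper's actual attack):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """One FGSM step against a logistic-regression model.

    For loss = -log p(y|x), the input gradient is (p - y) * w,
    so the attack perturbs x in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)          # model's confidence in class 1
    grad = (p - y) * w              # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad)  # bounded perturbation of size eps

# Toy linear "detector": score > 0 means the object is detected (class 1).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([1.0, 0.2])            # clean input, score = 0.6 > 0

x_adv = fgsm(x, w, b, y=1.0, eps=0.5)
print(w @ x + b > 0)                # clean input: classified correctly
print(w @ x_adv + b > 0)            # adversarial input: prediction flips
```

The perturbation budget `eps` bounds how far each input coordinate can move; physical attacks like the one in this paper must additionally survive real-world transformations (viewpoint, lighting), which this sketch does not model.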

Cited by 47 publications (34 citation statements)
References 16 publications
“…Unlike [9], they applied both synthetic and physical transformations. Later on, such a work has been extended to attack a general object recognition system [23].…”
Section: B. Adversarial Attacks to CNNs (mentioning)
confidence: 99%
“…Adversarial examples on general object detection have been extensively studied in the recent years [37,16]. A commonly explored domain for adversarial examples in detection is stop sign detection [6,7,8,4]. Stop signs have many structural properties that one can exploit: standard red color, with fixed shape and background.…”
Section: Related Work (mentioning)
confidence: 99%
“…versarial examples have been witnessed in a wide spectrum of practical systems [51,12], raising an urgent requirement for advanced techniques to achieve robust and reliable decision making, especially in safety-critical scenarios [13].…”
Section: Introduction (mentioning)
confidence: 99%