Adversarial Example Attacks in the Physical World
2020 | DOI: 10.1007/978-3-030-62460-6_51

Cited by 7 publications (3 citation statements) | References 23 publications
“…All detailed results are provided on our companion website [55]. Using other adversarial attacks, such as the fast gradient sign method (FGSM) [36] and iterative FGSM (i-FGSM) [73], will give the same conclusion.…”
Section: Threats to Validity
Mentioning confidence: 94%
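The methods named in this statement are standard gradient-based attacks. As a point of reference, below is a minimal FGSM sketch in PyTorch; it is illustrative only, not code from the cited papers, and the model interface, the [0, 1] pixel range, and the epsilon = 8/255 budget are assumptions.

    # Minimal FGSM sketch: one signed-gradient ascent step on the loss.
    # Assumptions: `model` returns logits, inputs are images in [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # step in the gradient's sign direction
        return x_adv.clamp(0.0, 1.0).detach()  # clip back to the valid pixel range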
“…Physical Attacks on Sensing: Existing physical attack methods can be mainly summarized into three categories based on their sensing modality: (1) Camera: Many attacks on cameras use physical patterns to spoof recognition or classification, such as special stickers [37], [33], [77], T-shirts [88], or posters [86]. (2) Lidar: Most approaches are achieved by placing objects with special geometric shapes [69], [26], [25], [80], [63] to spoof the segmentation or recognition models. (3) Microphone: Most approaches use speakers to play unusual noises over-the-air to make the Automatic Speech Recognition unable to distinguish or misunderstand voice commands [68], [28], [87], [23].…”
Section: Related Work
Mentioning confidence: 99%
“…With the proliferation of digital systems, increasingly sophisticated attack strategies are emerging against Deep Neural Networks (DNNs). The vulnerability of DNNs to malicious attacks capable of crafting adversarial examples that break their predictions has been investigated thoroughly [30]. There are two major categories of these attacks [31].…”
Section: White-box Adversarial Attacks
Mentioning confidence: 99%
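To make the white-box setting concrete: the attacker has full access to the model's gradients, as in the iterative FGSM (i-FGSM) cited in the first statement above. Below is an illustrative sketch extending the single-step FGSM function with repeated small steps and a projection onto the epsilon-ball; the step size alpha, budget epsilon, and iteration count are assumptions, not values from the cited works.

    # Sketch of i-FGSM: repeated small FGSM steps, each followed by a
    # projection onto the L-infinity ball around the original input.
    import torch
    import torch.nn.functional as F

    def ifgsm_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
        x_orig = x.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Keep the perturbation within epsilon and pixels in [0, 1].
            x_adv = torch.max(torch.min(x_adv, x_orig + epsilon), x_orig - epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv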