2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01491
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon

Cited by 93 publications (40 citation statements)
References 22 publications
“…RP2 [46], AdvLogo [78], PhGAN [79], AdvCam [80], PhysGAN [83], ShadowAttack [81]. Fig. 9 Examples of adversarial examples against traffic sign recognition.…”
Section: Light Projection Attack (LPA)
confidence: 99%
“…Recently, Zhong et al. [81] argued that the patterns of perturbations generated by prior approaches are conspicuous and attention-grabbing to human observers. To this end, the authors proposed to utilize a natural phenomenon (i.e., shadows) to perform physical attacks.…”
Section: ShadowAttack
confidence: 99%
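The shadow perturbation these statements describe can be illustrated with a minimal sketch: darken the pixels inside a polygonal (here triangular) region of the input image so it resembles a cast shadow. The shading factor of 0.45 and the triangle parameterisation are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def apply_shadow(image, vertices, shading=0.45):
    """Darken the pixels inside a triangular region to mimic a cast shadow.

    image: HxWx3 float array in [0, 1]; vertices: 3x2 array of (x, y) points.
    `shading` is an assumed illumination ratio, not a value from the paper.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Sign of the cross product tells which side of edge p0->p1 a pixel is on.
    def edge_sign(p0, p1):
        return (xs - p0[0]) * (p1[1] - p0[1]) - (ys - p0[1]) * (p1[0] - p0[0])

    s0 = edge_sign(vertices[0], vertices[1])
    s1 = edge_sign(vertices[1], vertices[2])
    s2 = edge_sign(vertices[2], vertices[0])
    # A pixel is inside the triangle iff it lies on the same side of all edges.
    inside = ((s0 >= 0) & (s1 >= 0) & (s2 >= 0)) | \
             ((s0 <= 0) & (s1 <= 0) & (s2 <= 0))

    shadowed = image.copy()
    shadowed[inside] *= shading  # uniformly darken the shadow region
    return shadowed
```

Because the perturbation only rescales brightness inside a simple region, the result looks like an ordinary shadow rather than an adversarial texture, which is the stealth property the citing papers highlight.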
“…Gao et al. [10] camouflage perturbations into haze to mislead the classifier. Zhong et al. [45] utilize perturbations that visually resemble shadows to craft adversarial examples. These works generate visually natural images based on theoretical models.…”
Section: Physical Attacks
confidence: 99%
“…However, these attacks often generate unnatural textures, which are quite visible to human eyes. Thus, many works focus on generating adversarial examples with natural styles that appear legitimate to human eyes, e.g., adversarial shadows [44] cast by polygons. Nevertheless, these visually plausible adversarial examples are still artifacts and seldom appear in real-world environments.…”
Section: Introduction
confidence: 99%
“…Zhang et al. [197] propose a novel approach for producing physically feasible adversarial camouflage to achieve transferable attacks on detection models. Study [198] explores a new category of optical adversarial examples generated by a commonly occurring natural phenomenon: shadows. They aim to employ these shadow-based perturbations to achieve naturalistic and inconspicuous physical-world adversarial attacks in black-box settings.…”
Section: Black-box Attacks
confidence: 99%
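Since [198] frames the attack as black-box, a query-only search over shadow placements conveys the idea. The sketch below (reusing `apply_shadow` and numpy from the sketch above) uses plain random search; the paper itself is described as using a more sophisticated optimiser (particle swarm optimisation), and `predict_fn` is a hypothetical stand-in for the victim classifier.

```python
def random_search_shadow_attack(image, true_label, predict_fn,
                                n_iters=200, seed=0):
    """Query-only (black-box) search for a shadow that flips the prediction.

    predict_fn(img) -> 1-D array of class probabilities is a hypothetical
    stand-in for the victim model; the paper reportedly uses particle swarm
    optimisation, which this plain random search only approximates.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    best_conf, best_img = 1.0, image
    for _ in range(n_iters):
        # Sample three random vertices (x, y) inside the image bounds.
        verts = rng.uniform(low=[0, 0], high=[w, h], size=(3, 2))
        candidate = apply_shadow(image, verts)
        probs = predict_fn(candidate)
        # Keep the shadow that most reduces confidence in the true class.
        if probs[true_label] < best_conf:
            best_conf, best_img = probs[true_label], candidate
        if probs.argmax() != true_label:
            break  # the shadow already causes a misclassification
    return best_img, best_conf
```

Only model outputs are queried, never gradients, which is what makes such shadow attacks applicable in the black-box setting the citation describes.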