2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00108

Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (AdvCam), to craft and camouflage physical-world adversarial examples into natural styles that appear…
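The abstract describes combining an adversarial objective with natural-style camouflage. As a rough illustration only, the sketch below shows one way such an objective could be assembled in PyTorch: a targeted adversarial classification loss plus standard style-transfer style/content losses. The function names, loss weights, and the assumption that `feat_extractor` returns a list of intermediate feature maps are illustrative choices, not the authors' released implementation.

```python
# Hedged sketch of an AdvCam-style objective: a targeted adversarial loss
# combined with style/content losses so the perturbation resembles a chosen
# natural style. Names, weights, and helpers are illustrative assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # Channel-wise feature correlations used by the style loss.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def advcam_step(x_adv, x_content, x_style, target_class,
                classifier, feat_extractor,
                w_adv=1.0, w_style=1e4, w_content=1.0):
    """One optimization step on x_adv (a tensor with requires_grad=True)."""
    # Adversarial term: drive the classifier toward the target label.
    loss_adv = F.cross_entropy(classifier(x_adv), target_class)

    # Style and content terms from intermediate feature maps.
    f_adv = feat_extractor(x_adv)        # assumed: list of feature maps
    f_style = feat_extractor(x_style)
    f_content = feat_extractor(x_content)
    loss_style = sum(F.mse_loss(gram_matrix(a), gram_matrix(s))
                     for a, s in zip(f_adv, f_style))
    loss_content = F.mse_loss(f_adv[-1], f_content[-1])

    loss = w_adv * loss_adv + w_style * loss_style + w_content * loss_content
    loss.backward()
    return loss.item()
```

In practice such a step would alternate with an optimizer update on x_adv (e.g. Adam) and a projection of pixel values back to a valid range.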

Cited by 192 publications (118 citation statements) · References 20 publications

“…14. Representative examples of successful physical-world attacks that fool recognition systems with AdvCam [273], RP2 [123] and adversarial patch [45]. Object detectors can be fooled in vehicle detection scenarios.…”
Section: Physical World Attacks
confidence: 99%
“…Although external features are well defined, how to construct them is still difficult since the learning dynamics of DNNs remain unclear and the concept of features itself is complicated. However, we at least know that image style can serve as a feature for the learning of DNNs in image-related tasks, based on some recent studies (Geirhos et al. 2019; Duan et al. 2020; Cheng et al. 2021). As such, we can use style transfer (Johnson, Alahi, and Fei-Fei 2016; Huang and Belongie 2017; Chen et al. 2020) for embedding external features.…”
Section: Embedding External Features
confidence: 99%
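The excerpt above cites Huang and Belongie (2017), whose style-transfer method injects a style by re-normalizing feature statistics (adaptive instance normalization, AdaIN). A minimal sketch of that operation, assuming feature tensors of shape (batch, channels, height, width), is:

```python
# Minimal AdaIN sketch (Huang and Belongie, 2017): align the channel-wise
# mean and standard deviation of the content features to those of the style
# features. Tensor shapes are assumed to be (B, C, H, W).
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```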
“…Therefore, we can see that deep learning is facing many security problems. The scenarios in which adversaries can attack include face recognition [13], disease diagnosis [14], spam detection [15], autonomous driving [16], and so on. How to defend against these attacks is still an open problem for researchers.…”
Section: Introduction
confidence: 99%