2021
DOI: 10.48550/arXiv.2104.11101
Preprint

Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors

Abstract: This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes. We sample triangular faces on a reference human mesh and create an adversarial texture atlas over those faces. The adversarial texture is transferred to human meshes in various poses, which are rendered onto a collection of real-world background images. Contrary to traditional patch-based adversarial attacks, where prior work attempts to fool trained object detectors using appended adver…
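Read from the abstract alone, the training procedure amounts to optimizing a texture atlas over the sampled faces through a differentiable renderer against a frozen detector. The following PyTorch code is a minimal sketch of such a loop, not the authors' implementation; the renderer (assumed to return an RGB image plus an alpha mask for compositing) and detector (assumed to return person-class confidences) arguments are hypothetical stand-ins for the real pipeline components.

    import torch

    def train_adversarial_texture(renderer, detector, meshes, backgrounds,
                                  num_patch_faces, steps=1000, lr=0.01):
        # Learnable per-face RGB texture atlas over the sampled patch faces.
        texture = torch.rand(num_patch_faces, 3, requires_grad=True)
        opt = torch.optim.Adam([texture], lr=lr)
        for step in range(steps):
            mesh = meshes[step % len(meshes)]          # cycle over poses
            bg = backgrounds[step % len(backgrounds)]  # and real backgrounds
            rgb, alpha = renderer(mesh, texture)       # differentiable render
            img = alpha * rgb + (1.0 - alpha) * bg     # composite onto scene
            scores = detector(img.unsqueeze(0))        # person confidences
            loss = scores.max()                        # suppress detections
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                texture.clamp_(0.0, 1.0)               # keep valid RGB range
        return texture.detach()

Minimizing the maximum person confidence is only one plausible objective; the transferability ingredient the abstract emphasizes is the randomization of pose and background at every step.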

Cited by 4 publications (7 citation statements, published 2021–2023) | References 23 publications
“…Physical Adversarial Attacks. The purpose of a physical adversarial attack is to craft localized, visible perturbations that can deceive DNN-based vision systems. By spatial dimension, physical attacks can be classified into 2D physical attacks [16,17,19,20,21,22] and 3D physical attacks [23,14,18,13]. Sharif et al. [19] developed a method of deceiving face-recognition systems by generating physical eyeglass frames.…”
Section: Related Work
confidence: 99%
“…Athalye et al. [23] developed Expectation Over Transformation (EOT), the first approach for generating robust 3D adversarial samples. Maesumi et al. [18] presented a universal 3D-to-2D adversarial attack method in which a structured patch is sampled from a reference human model and the human pose can be adjusted freely during training. Wang et al. [13] proposed the Dual Attention Suppression (DAS) attack, built on an open-source 3D virtual environment, and Jiang et al. [14] extended DAS with a full-coverage adversarial attack.…”
Section: Related Work
confidence: 99%
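Expectation Over Transformation, cited above, makes an adversarial example robust by minimizing the expected loss over a distribution of transformations rather than the loss on a single view. Below is a minimal sketch of one EOT update, assuming model is a differentiable classifier, target is the class the attacker wants predicted, and transforms is a list of differentiable transformations (e.g., random rotations or lighting changes); none of these names come from the cited papers.

    import torch
    import torch.nn.functional as F

    def eot_step(x_adv, model, target, transforms, opt):
        # Monte Carlo estimate of the expected loss over transformations.
        losses = [F.cross_entropy(model(t(x_adv)), target)
                  for t in transforms]
        loss = torch.stack(losses).mean()
        opt.zero_grad()
        loss.backward()   # gradient flows through each transformation
        opt.step()        # push x_adv toward the target class on average
        return loss.item()

Here x_adv is assumed to be a leaf tensor with requires_grad=True and registered with opt; driving the averaged cross-entropy down makes the target prediction survive the whole transformation distribution, which is what gives physical attacks their robustness.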
“…Adversarial attacks have been leveraged to test the robustness of many computer-vision-based systems (Vakhshiteh, Nickabadi, and Ramachandra 2020). In these attacks, noise is intentionally added to images (Goodfellow, Shlens, and Szegedy 2014; Xiao et al. 2019; Qiu et al. 2020; Xu et al. 2020; Maesumi et al. 2021; Duan et al. 2020) or videos (Jiang et al. 2019; Chen et al. 2021), which are then fed to the system to check how robust it is to these changes. Such attacks may also be used to increase privacy for non-consenting individuals (equalAIs 2021). The techniques range from the simple, such as directly modifying an image file's bits, using image-manipulation software like GIMP (GIMP 2021), or using libraries such as that of Bloice, Roth, and Holzinger (2019), to the sophisticated, such as game-theoretic techniques (Oh, Fritz, and Schiele 2017) and deep-learning techniques (Chandrasekaran et al. 2020; Goel et al. 2018; Garofalo et al. 2018; Bose and Aarabi 2018; Massoli et al. 2021; Xiao et al. 2019; Jiang et al. 2019; Qiu et al. 2020; Xu et al. 2020; Maesumi et al. 2021; Duan et al. 2020; Chen et al. 2021).…”
Section: Adversarial Attack in Computer Vision
confidence: 99%
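The simplest instance of the intentionally added noise mentioned above is the fast gradient sign method from the cited Goodfellow, Shlens, and Szegedy (2014) paper: a single signed-gradient step that increases the model's loss under an L-infinity budget. A minimal sketch follows; the eps value and the [0, 1] pixel range are conventional choices, not values taken from the papers above.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        # One signed-gradient step that increases the loss on (x, y).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range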