2019
DOI: 10.48550/arxiv.1912.05021
Preprint
Design and Interpretation of Universal Adversarial Patches in Face Detection

Abstract: We consider universal adversarial patches for faces: small visual elements whose addition to a face image reliably destroys the performance of face detectors. Unlike previous work, which mostly focused on the algorithmic design of adversarial examples to improve the attacker's success rate, in this work we provide an interpretation of such patches, which can prevent state-of-the-art face detectors from detecting real faces. We investigate a phenomenon: patches designed to suppress real face detec…

Cited by 2 publications (1 citation statement)
References 43 publications
“…Adversarial attacks can fool models in many computer vision tasks, e.g. image classification [6,4,13], image segmentation [9,7], face detection [17,2] and object detection [16,15,8,12]. Adversarial attacks against object detection can be divided into two categories: 1) whole-pixel attacks, which add perturbations to all pixels of an image under an L_p constraint (e.g.…”
Section: Introduction
Confidence: 99%
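The whole-pixel attacks described in this citation statement can be illustrated with a minimal sketch, assuming a NumPy implementation with an L_inf constraint (the function names and the eps value of 8/255 are illustrative choices, not taken from the cited papers): every pixel of the image may change, but each by at most eps.

```python
import numpy as np

def project_linf(delta, eps):
    """Project a perturbation onto the L_inf ball: ||delta||_inf <= eps."""
    return np.clip(delta, -eps, eps)

def apply_perturbation(image, delta, eps=8 / 255):
    """Add an L_inf-bounded whole-pixel perturbation, keeping pixels in [0, 1]."""
    adv = image + project_linf(delta, eps)
    return np.clip(adv, 0.0, 1.0)

# Toy example: a random perturbation on a 4x4 grayscale "image" in [0, 1].
rng = np.random.default_rng(0)
image = rng.random((4, 4))
delta = rng.normal(scale=0.1, size=(4, 4))
adv = apply_perturbation(image, delta)

# Every pixel was allowed to change, but by no more than eps.
assert float(np.max(np.abs(adv - image))) <= 8 / 255 + 1e-9
```

A patch attack, by contrast, confines the change to a small spatial region but places no per-pixel magnitude bound there, which is what makes it physically realizable on a printed sticker.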