2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01167

Improving Transferability of Adversarial Patches on Face Recognition with Generative Models

Cited by 87 publications (52 citation statements)
References 14 publications
“…[Table: prior adversarial attacks on face recognition; W = white-box, B = black-box]

Ref    Setting   Limitation                                          Feature
[31]   W         Limited to eyeglasses                               Physically realizable and inconspicuous
[7]    B         Needs 10K queries                                   Minimum required perturbation
[36]   W         Limited to infrared perturbations                   Invisible to human eyes
[37]   W         Limited to female faces                             Realistic make-up as perturbation
[25]   W/B       Limited to light projection                         Transformation-invariant pattern generation
[34]   B         Extra regularization techniques are required        Face-like features as adversarial perturbations
[13]   B         Limited to unsophisticated adversarial instances    Mimicking real-world distortions
[33]   B         Needs huge number of queries                        Adversarial Morphing Attack
[3]    B         Low-quality adversarial instances                   Fast Geometrically-Perturbed Faces

…[black-box attackers do] not get access to the model details. The black-box setting is more realistic since no internal model information is known to the attackers except the model's output, including the hard-label prediction and confidence score.…”
Section: Research Setting
confidence: 99%
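
To make the white-box/black-box distinction in the quote concrete, below is a minimal sketch of a score-based black-box attack loop that only queries the model's confidence output, as a black-box attacker would. Everything here is an illustrative assumption: query_model is a hypothetical stand-in for a victim face-recognition system, not an interface from any cited work.

```python
# Score-based black-box attack sketch: the attacker sees only a
# confidence score, never gradients or weights (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def query_model(image: np.ndarray) -> float:
    # Hypothetical stand-in for the victim system: returns the
    # confidence that `image` matches the protected identity.
    w = np.linspace(-1.0, 1.0, image.size).reshape(image.shape)
    return float(1.0 / (1.0 + np.exp(-(image * w).sum())))

def random_search_attack(image, eps=0.05, steps=500):
    """Keep a random perturbation only if it lowers the matching
    confidence returned by black-box queries (a dodging attack)."""
    adv, best = image.copy(), query_model(image)
    for _ in range(steps):
        noise = rng.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(adv + noise, 0.0, 1.0)
        score = query_model(candidate)
        if score < best:  # accept only confidence-decreasing moves
            adv, best = candidate, score
    return adv, best

x = rng.uniform(0.0, 1.0, size=(8, 8))  # toy "face" image
adv, score = random_search_attack(x)
print(f"matching confidence after attack: {score:.3f}")
```

The query budget (steps) is exactly the cost that table rows such as "Needs 10K queries" refer to.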
“…Besides presentation attacks, adversarial attacks against face recognition have recently emerged as a new type of threat. Research has shown that face recognition systems can be easily spoofed by impersonation attackers wearing a small printed adversarial patch [163, 217, 339, 405]. Concerns about this new type of risk have made adversarial attack and defense in face recognition an important applied research direction of adversarial ML.…”
Section: Face Recognition
confidence: 99%
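
The printed-patch threat described above can be sketched as a white-box optimization: a small patch region is updated by gradient steps so the patched face's embedding moves toward a target identity. This is a minimal sketch under stated assumptions; the stand-in embedding network, patch location, and loss are illustrative, not the method of the paper indexed here.

```python
# White-box adversarial patch sketch for impersonation (illustrative).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a face embedding network (real attacks target models
# such as ArcFace); a 32x32 "face" keeps the example self-contained.
embed = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 128))

face = torch.rand(1, 3, 32, 32)                 # attacker's face image
target_emb = F.normalize(torch.randn(1, 128))   # target identity embedding

patch = torch.zeros(1, 3, 8, 8, requires_grad=True)  # small patch variable
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    patched = face.clone()
    # Paste the patch at a fixed region (e.g. the forehead); sigmoid
    # keeps pixel values in [0, 1] so the patch stays printable.
    patched[:, :, 2:10, 12:20] = torch.sigmoid(patch)
    emb = F.normalize(embed(patched))
    # Maximize cosine similarity to the target identity's embedding.
    loss = -F.cosine_similarity(emb, target_emb).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final cosine similarity to target: {-loss.item():.3f}")
```

A physically realized attack additionally needs transformation-invariance (random rotations, lighting changes, and printing constraints applied inside the loop), which is what several of the works tabulated above address.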
“…Despite their dominant performance on various tasks, deep neural networks (DNNs) have proven vulnerable to adversarial examples (Goodfellow, Shlens, and Szegedy 2015), i.e., adding well-designed, human-imperceptible perturbations to natural images can mislead DNNs. This weakness has raised serious security concerns about deploying DNNs in security-sensitive scenarios (Xiao et al. 2021; Fang et al. 2021) and has drawn researchers' attention to model security (Naseer et al. 2020). According to the attacker's knowledge of the victim model, adversarial attacks can be categorized into white-box and black-box attacks.…”
Section: Introduction
confidence: 99%
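
The vulnerability the quote describes is classically demonstrated with the fast gradient sign method (FGSM) from the cited Goodfellow, Shlens, and Szegedy (2015) paper: one step in the direction of the loss gradient's sign. The sketch below uses a toy linear classifier and an illustrative epsilon; it shows the mechanics, not a result from any cited work.

```python
# FGSM sketch (Goodfellow, Shlens, and Szegedy 2015), toy setting.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(784, 10)   # toy classifier standing in for a DNN
x = torch.rand(1, 784)             # natural input scaled to [0, 1]
y = torch.tensor([3])              # assumed true label

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()                    # white-box: gradients are available

eps = 0.1                          # perturbation budget (illustrative)
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

In the black-box case distinguished at the end of the quote, x.grad is unavailable, so attackers must estimate useful directions from model outputs alone, as in query-based attacks.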