2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00415
Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

Cited by 56 publications (32 citation statements)
References 13 publications
“…They compute the adversarial illumination pattern on an image of the identity and use the cap to project that pattern onto the face in the physical world while presenting the face to the vision system. A related concept of 'adversarial light projection' is studied in [252], which projects a rather conspicuous pattern onto faces to evade the FaceNet model [253] in white-box settings. Other examples of physical-world attacks on face recognition systems include AdvHat [254] and adversarial patches for faces [255].…”
Section: Face Recognition
confidence: 99%
“…Nguyen et al. [55] evaluated their approach against FaceNet, SphereFace, and one commercial off-the-shelf FR system and confirmed the models' vulnerability to light projection attacks. They used a similarity score threshold corresponding to a FAR of 0.01% to determine whether the attack is successful.…”
Section: Comparison of Different Adversaries on Evaluation Process
confidence: 88%
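The success criterion described in the excerpt above can be sketched as a threshold test on embedding similarity. This is a hypothetical illustration: the cosine metric and the `attack_succeeds` helper are assumptions for clarity, not the cited paper's exact implementation, and the threshold would in practice be calibrated on a verification set so that the false accept rate (FAR) is 0.01%.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attack_succeeds(probe_emb, target_emb, threshold):
    """An impersonation attack counts as successful when the adversarial
    probe's similarity to the target identity meets or exceeds the
    system's decision threshold (e.g. one calibrated to FAR = 0.01%)."""
    return cosine_similarity(probe_emb, target_emb) >= threshold
```

With this criterion, a stricter (higher) threshold corresponds to a lower FAR, so reporting success against a FAR-0.01% threshold is a conservative measure of the attack's strength.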
“…• A dodging attack occurs when the attacker tries to have a face misidentified as any other arbitrary face. It is also known as an obfuscation attack in the literature [55], [56]. • An evasion attack tries to evade the system by altering samples during the testing phase without influencing the training data.…”
Section: Terms and Definitions
confidence: 99%
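The dodging-versus-impersonation distinction in the excerpt above can be sketched in terms of embedding distances. This is a minimal illustration under assumed names and an L2 metric; the actual systems in the cited works may use different metrics and thresholds.

```python
import numpy as np

def l2_distance(a, b):
    """Euclidean distance between two face embeddings."""
    return float(np.linalg.norm(a - b))

def is_dodging_success(probe_emb, enrolled_emb, threshold):
    """Dodging (obfuscation): the adversarial probe should no longer
    match the attacker's own enrolled identity, i.e. its distance to
    that identity exceeds the match threshold."""
    return l2_distance(probe_emb, enrolled_emb) > threshold

def is_impersonation_success(probe_emb, victim_emb, threshold):
    """Impersonation: the adversarial probe should match one specific
    victim identity, i.e. its distance falls within the threshold."""
    return l2_distance(probe_emb, victim_emb) <= threshold
```

The asymmetry is the key point: dodging only needs to push the probe away from one identity, while impersonation must pull it close to a chosen one, which is generally harder.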