2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00573
Exploring Adversarial Fake Images on Face Manifold

Cited by 27 publications (13 citation statements) | References 19 publications
“…Huang et al [70] showed the existence of both individual and universal adversarial perturbations that can cause well-performing deepfake classifiers to misbehave. Li et al [71] demonstrated that fake facial images generated using adversarial points on a face manifold can defeat two strong forensic classifiers. Even methods that won the Deepfake Detection Challenge (DFDC) [72] were easily bypassed in a practical attack scenario using transferable and accessible adversarial attacks [73].…”
Section: Robust Deepfake Detection (mentioning)
confidence: 99%
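To make the distinction between individual and universal perturbations concrete, the sketch below optimizes a single noise map shared by a whole batch of fake images. It is a minimal illustration under stated assumptions, not the procedure of [70]: `detector` is a hypothetical differentiable deepfake classifier whose output logit is high for fake inputs, and `fake_images` is a batch tensor in [0, 1].

```python
import torch

def universal_perturbation(detector, fake_images, eps=8 / 255, steps=100, lr=1e-2):
    """Learn one perturbation shared by every image in `fake_images`.

    `detector(x)` is a placeholder for a differentiable deepfake classifier
    whose output logit is high for fake inputs. Because the same `delta`
    is added to all images, the result is a *universal* perturbation.
    """
    delta = torch.zeros_like(fake_images[:1], requires_grad=True)  # one shared noise map
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = detector((fake_images + delta).clamp(0, 1))
        loss = logits.mean()              # push the "fake" logit down for the whole batch
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)       # keep the shared noise imperceptible
    return delta.detach()
```

An individual perturbation is the special case where the batch contains a single image; the universal variant trades some per-image effectiveness for a pattern that degrades the detector across many inputs at once.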
“…Huang et al [70] showed the existence of both individual and universal adversarial perturbations that can cause well-performing deepfake classifiers to misbehave. Li et al [71] demonstrated that fake facial images generated using adversarial points on a face manifold can defeat two strong forensic classifiers. Even methods that won the Deepfake Detection Challenge (DFDC) [72] were easily bypassed in a practical attack scenario using transferable and accessible adversarial attacks [73].…”
Section: Robust Deepfake Detectionmentioning
confidence: 99%
“…For adversarial attacks in face forgery detection, some works [4,13,21,26,34] explore the robustness of models in different settings. Li et al [26] manipulate the noise vectors and latent vectors of StyleGAN [48] with gradients to fool face forgery detection models. Neekhara et al [34] perform adversarial attacks in a black-box setting for face forgery detection.…”
Section: Adversarial Attack (mentioning)
confidence: 99%
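The gradient-based latent manipulation can be illustrated with a short sketch. This is a minimal outline of the general idea, not the exact optimization of Li et al [26]: `generator(w, noise)` and `detector(img)` are assumed stand-ins for a differentiable StyleGAN-style generator and a forensic classifier (higher logit means fake), and the latent and noise inputs are nudged so the generated face scores as real while staying close to the starting point.

```python
import torch

def attack_latents(generator, detector, w, noise, steps=200, lr=0.01, lam=1.0):
    """Optimize StyleGAN-style latent and noise inputs with gradients.

    `generator(w, noise)` and `detector(img)` are hypothetical stand-ins for a
    differentiable generator and a forensic classifier (higher logit = fake).
    """
    w0 = w.detach().clone()
    w = w.detach().clone().requires_grad_(True)
    noise = [n.detach().clone().requires_grad_(True) for n in noise]
    opt = torch.optim.Adam([w, *noise], lr=lr)
    for _ in range(steps):
        img = generator(w, noise)
        fake_logit = detector(img)
        # Lower the detector's fake score while keeping w near its start,
        # so the generated image stays on (or near) the face manifold.
        loss = fake_logit.mean() + lam * (w - w0).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), [n.detach() for n in noise]
```

Because the search happens in the generator's input space rather than in pixel space, the output remains a plausible face instead of an image carrying visible additive noise.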
“…For instance, a forged face image that is correctly classified as fake can, once adversarial perturbations are added, fool the detector into wrongly deciding that it is real. Existing works [4,13,21,26,34] have explored the robustness of face forgery detection methods, but these methods add adversarial perturbations or patches to the original images, which are easily recognized by human eyes. In brief, adversarial examples aim to fool a face forgery detector, while the objective of face forgery generation is to fool humans.…”
Section: Introduction (mentioning)
confidence: 99%
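The pixel-space attack described in this excerpt is commonly realized as projected gradient descent (PGD). The following is a generic sketch rather than any specific method from [4,13,21,26,34]; it again assumes a placeholder `detector` whose output logit is high for fake inputs and an image tensor in [0, 1].

```python
import torch

def pgd_attack(detector, fake_image, eps=4 / 255, alpha=1 / 255, steps=40):
    """Perturb a single forged image so the detector scores it as real.

    The perturbation is kept inside an L-infinity ball of radius `eps`,
    which keeps it small but spreads it over every pixel of the image.
    """
    x_adv = fake_image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logit = detector(x_adv)
        grad = torch.autograd.grad(logit.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                         # step toward "real"
            x_adv = fake_image + (x_adv - fake_image).clamp(-eps, eps)  # project back into the ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

As the excerpts that follow point out, such full-pixel perturbations can defeat the detector yet may be noticed by human eyes and tend to transfer poorly across models.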
“…Regarding anti-forensics, in recent years, it has been shown that forensic detectors based on deep neural networks (DNNs) are vulnerable to adversarial perturbations [17][18][19][20][21]. By adding carefully designed and imperceptible anti-forensic noise to the fake face image, the forensic detectors are rendered ineffective.…”
Section: Introduction (mentioning)
confidence: 99%
“…By adding carefully designed and imperceptible anti-forensic noise to the fake face image, the forensic detectors are rendered ineffective. However, the existing attack-based methods [17][18][19][20][21] perturb all pixels, which introduces many redundant and meaningless perturbations. In addition, the transferability of these methods is still insufficient.…”
Section: Introduction (mentioning)
confidence: 99%