2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00103

Adversarial Threats to DeepFake Detection: A Practical Perspective

Cited by 60 publications (17 citation statements) | References 26 publications
“…For example, Carlini et al. [9] successfully evade detection by Frank et al.'s [5] state-of-the-art frequency-space DNN defense by flipping the lowest bit in each pixel of deepfake images. Similar efforts by Neekhara et al. [10] and others corroborate this vulnerability.…”
Section: Introduction (supporting)
confidence: 77%
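The lowest-bit attack quoted above admits a compact sketch. The PyTorch code below is a hypothetical rendering, not Carlini and Farid's implementation: it assumes a detector `model` that maps a float image batch in [0, 1] to one "fake" logit per sample, and flips each pixel's least significant bit only when the flip moves the pixel in the direction that lowers that logit.

```python
import torch

def lsb_flip_attack(model, x_uint8):
    """Minimal sketch of a lowest-bit-flip evasion attack.

    Assumptions (not from the cited papers): `model` maps a float
    image batch in [0, 1] to one logit per sample, where a positive
    logit means "fake". Each pixel's least significant bit is flipped
    only when the flip lowers the "fake" score (an FGSM-style sign step).
    """
    x = x_uint8.float().div(255.0).requires_grad_(True)
    model(x).sum().backward()                  # gradient of the "fake" logits
    step = (-x.grad.sign()).long()             # direction that reduces them
    # Flipping the LSB adds +1 to an even pixel value and -1 to an odd one.
    flip_effect = 1 - 2 * (x_uint8.long() % 2)
    do_flip = step == flip_effect              # flip only where it helps
    x_adv = torch.where(do_flip, x_uint8.long() ^ 1, x_uint8.long())
    return x_adv.to(torch.uint8)
```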
“…In fact, recent work from Carlini and Farid [9] has shown that, with as little as the ability to flip the lowest bit in each pixel, deepfake images can be perturbed to evade detection by modern DNN deepfake detection classifiers (e.g., those from Frank et al. [5] and Wang et al. [22]). Neekhara et al. [10] have also explored black-box attacks and Universal Adversarial Perturbation [28] style attacks on deepfake detection.…”
Section: Adversarial Examples (mentioning)
confidence: 99%
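The Universal Adversarial Perturbation style of attack mentioned in this excerpt can likewise be sketched. The code below is an assumed, minimal setup rather than Neekhara et al.'s method: a single perturbation `delta`, shared across all inputs, is optimized over a loader of fake images and projected back onto an L-infinity ball; the input size and detector interface are assumptions.

```python
import torch

def universal_perturbation(model, loader, eps=8 / 255, lr=1e-2, epochs=5):
    """Minimal sketch of a universal-perturbation-style evasion.

    Assumptions: `loader` yields batches of fake images as float
    tensors of shape (N, 3, 224, 224) in [0, 1], and `model` returns
    one "fake" logit per sample. One shared `delta` is trained to
    push those logits down across the whole dataset.
    """
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input size
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x in loader:
            loss = model((x + delta).clamp(0, 1)).mean()  # mean "fake" logit
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)        # project onto the eps-ball
    return delta.detach()
```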
“…Li et al. [71] demonstrated that fake facial images generated using adversarial points on a face manifold can defeat two strong forensic classifiers. Even methods that won the Deepfake Detection Challenge (DFDC) [72] were easily bypassed in a practical attack scenario using transferable and accessible adversarial attacks [73].…”
Section: Robust Deepfake Detection (mentioning)
confidence: 99%
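As an illustration of the transferable attacks this excerpt refers to, the following hypothetical sketch runs PGD against an accessible surrogate detector and then scores the result with an unseen target model; the detector interfaces, step size, and budget are assumptions, not details from [73].

```python
import torch

def transfer_attack(surrogate, target, x, eps=8 / 255, steps=10):
    """Minimal sketch of a transfer-based black-box evasion.

    Assumptions: `surrogate` is an accessible detector and `target`
    the unseen one; both map float batches in [0, 1] to one "fake"
    logit per sample. PGD is run against the surrogate only, and the
    resulting images are scored by the target to test transfer.
    """
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = surrogate(x_adv).mean()         # "fake" logit on the surrogate
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - (1.5 * eps / steps) * grad.sign()  # descend the logit
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv, target(x_adv).detach()       # transfer succeeds if logits drop
```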
“…Then, they propose two methods, based on Lipschitz regularization (Woods et al., 2019) and on the deep image prior (Ulyanov et al., 2018), to improve the adversarial robustness of DeepFake detectors. Neekhara et al. (2020) further study adversarial evasion attacks on the more challenging DeepFake Detection Challenge (DFDC) dataset (Dolhansky et al., 2020) and find that the input-preprocessing steps and face detection methods used across DeepFake detectors make adversarial transferability difficult. They then implement a highly transferable attack based on universal adversarial perturbations to overcome these challenges.…”
Section: Evasion of Deepfake Detection (mentioning)
confidence: 99%
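The Lipschitz regularization mentioned above can be approximated by a generic input-gradient penalty, one common way to encourage a small local Lipschitz constant. The sketch below shows that generic form under stated assumptions; it is not the exact regularizer of Woods et al. (2019).

```python
import torch

def lipschitz_penalized_loss(model, x, y, criterion, lam=1.0):
    """Minimal sketch of a Lipschitz-style gradient penalty.

    Assumptions: `criterion` is an ordinary classification loss such
    as torch.nn.BCEWithLogitsLoss(), and the penalty is the squared
    norm of the loss gradient with respect to the input, added to the
    training objective with weight `lam`.
    """
    x = x.clone().requires_grad_(True)
    loss = criterion(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)  # differentiable grad
    penalty = grad.flatten(1).pow(2).sum(dim=1).mean()         # squared grad norm
    return loss + lam * penalty
```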
“…Mirsky and Lee (2020) focused their survey on DeepFake generation, with detailed charts of generation DNN models. Neekhara et al. (2020) provided a practical perspective focusing on adversarial threats to DeepFake detection. Verdoliva (2020) discussed the interplay between multimedia forensics and DeepFakes.…”
Section: Introduction (mentioning)
confidence: 99%