2020
DOI: 10.48550/arxiv.2003.10596
Preprint
Adversarial Perturbations Fool Deepfake Detectors

Cited by 8 publications (10 citation statements) · References: 0 publications
“…In the field of deepfake detection, neural networks are widely used to distinguish forged videos. However, due to inherent defects, neural networks cannot resist adversarial-example attacks [86][87][88]. To this end, researchers need to design more robust algorithms that can […] In recent years, deepfake technologies, which rely on deep learning, have been developing at an unprecedented rate.…”
Section: Antiforensics (mentioning)
confidence: 99%
“…Deep-Fake algorithms can be used for malicious activities that may cause psychological, political, monetary, and physical harm. Given such implications, defenses have been developed using machine learning models to differentiate between real and fake videos (DeepFakes) [13,15].…”
Section: Novel Defense Directions (mentioning)
confidence: 99%
“…Recently, gradient-based adversarial attacks have also been applied on CNN-based DeepFake detection systems to expose their vulnerabilities to adversarial examples [5,13,24]. While some of this past work [5,13] focuses on attacks on image classification models, the authors of [24] study the vulnerability of video DeepFake detection methods which follow the same detection pipeline as the methods studied in our work. While this past work demonstrates that adversarial examples can fool video DeepFake detectors, designing such adversarial videos requires complete access to the victim model architecture and parameters (white-box attack).…”
Section: Prior Work On Fooling Deepfake Detectors (mentioning)
confidence: 99%
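The white-box, gradient-based attacks described in this citation statement derive the perturbation from the victim detector's own gradients, which is why they require full access to the model architecture and parameters. The sketch below illustrates the general idea with a one-step, FGSM-style targeted perturbation in PyTorch; the `detector` model, the frame tensor layout, and the epsilon budget are illustrative assumptions, not details taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, frame, epsilon=8 / 255):
    """One-step FGSM-style perturbation against a binary real/fake detector.

    Assumptions (hypothetical, for illustration only):
      - `detector` maps an image tensor of shape (N, C, H, W) to logits
        over two classes (index 0 = real, index 1 = fake);
      - `frame` is a fake frame, scaled to [0, 1], that the attacker
        wants the detector to label as real.
    """
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame)

    # Targeted loss: push the prediction toward the "real" label (index 0).
    target = torch.zeros(frame.shape[0], dtype=torch.long, device=frame.device)
    loss = F.cross_entropy(logits, target)
    loss.backward()

    # Descend the targeted loss by stepping against the sign of the gradient,
    # then clamp back to the valid pixel range.
    adv_frame = frame - epsilon * frame.grad.sign()
    return adv_frame.clamp(0.0, 1.0).detach()

# Example use (with a hypothetical detector and fake frame):
# adv = fgsm_perturb(detector, fake_frame)
```

Because the step uses the detector's gradient directly, this kind of attack cannot be mounted as-is without white-box access, which is the limitation the citing work points out.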