2022
DOI: 10.1109/tifs.2022.3186803
LiSiam: Localization Invariance Siamese Network for Deepfake Detection

Cited by 38 publications (7 citation statements)
References 45 publications
“…For example, Wang et al. [24] proposed a multi-regional attention mechanism to enhance deepfake detection performance. Additionally, vision Transformers have been employed to establish an end-to-end deepfake detection framework [17], [40]. Furthermore, recent research has developed LipForensics [11], which analyzes lip movements using networks pre-trained on lip-reading tasks.…”
Section: A. Deepfake Detection
confidence: 99%
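The multi-regional attention mechanism mentioned in the excerpt above can be sketched as a softmax weighting over per-region features. The shapes and the scoring vector `w` below are illustrative assumptions, not the cited paper's actual architecture:

```python
import numpy as np

def region_attention_pool(region_feats: np.ndarray, w: np.ndarray):
    """Sketch: score each facial region's feature, softmax the scores,
    and pool a weighted sum. Shapes are hypothetical."""
    scores = region_feats @ w               # (R,) relevance score per region
    weights = np.exp(scores - scores.max()) # subtract max for numerical stability
    weights /= weights.sum()                # softmax over regions
    pooled = region_feats.T @ weights       # (D,) attention-weighted feature
    return weights, pooled

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))             # 5 regions, 8-dim features (assumed)
w = rng.normal(size=8)
weights, pooled = region_attention_pool(feats, w)
```

Regions with higher scores dominate the pooled feature, letting the detector emphasize regions that carry manipulation traces.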
“…A similar idea is adopted in RECCE [49], where only real images are reconstructed from their noisy versions. LiSiam [50] explores robust representations using a localization invariance loss, while [51] and [52] exploit relations between local regions to reveal discriminative information. Additionally, RFM [53] proposes an attention-based erasing operation to encourage the model to learn features from more potential manipulation regions.…”
Section: B. Face Forgery Detection via Representation Learning
confidence: 99%
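A minimal sketch of a localization-invariance consistency term in the spirit of LiSiam: penalize disagreement between forgery-localization maps predicted for a clean image and for a degraded (e.g. compressed) view of it. The map shapes and the plain MSE form are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def localization_invariance_loss(map_clean: np.ndarray,
                                 map_degraded: np.ndarray) -> float:
    """Sketch: both inputs are H x W per-pixel forgery-probability maps
    predicted by the two Siamese branches. Identical maps incur zero loss;
    disagreement caused by image degradation is penalized."""
    return float(np.mean((map_clean - map_degraded) ** 2))

a = np.full((4, 4), 0.75)   # map predicted from the clean view (assumed values)
b = np.full((4, 4), 0.25)   # map predicted from the degraded view
zero = localization_invariance_loss(a, a)
gap = localization_invariance_loss(a, b)
```

Minimizing such a term pushes the network toward representations that are invariant to image quality, which is what makes the learned features robust under compression.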
“…It is challenging to extract representative features from degraded inputs since the forgery clues are too subtle to mine [50], [70]. In this paper, we cast the problem of detecting face forgery as a prototype learning task.…”
Section: B. Fine-Grained Triplet Relation Learning
confidence: 99%
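Casting face forgery detection as prototype learning, as the excerpt describes, can be illustrated by nearest-prototype classification in feature space. The two-dimensional features and class prototypes below are invented for the sketch:

```python
import numpy as np

def nearest_prototype(feat: np.ndarray, prototypes: dict) -> str:
    """Sketch: assign a feature vector to the class whose learned
    prototype is closest in Euclidean distance."""
    dists = {label: np.linalg.norm(feat - p) for label, p in prototypes.items()}
    return min(dists, key=dists.get)

# Hypothetical learned prototypes for the two classes.
protos = {"real": np.array([0.0, 0.0]), "fake": np.array([1.0, 1.0])}
pred = nearest_prototype(np.array([0.1, 0.2]), protos)  # closer to "real"
```

Even when forgery clues are subtle, comparing against class prototypes can be more stable than a pointwise decision boundary, which is the motivation the quote gives.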