2023
DOI: 10.1016/j.patcog.2022.109179
Watching the BiG artifacts: Exposing DeepFake videos via Bi-granularity artifacts

Cited by 22 publications (2 citation statements)
References 8 publications
“…Furthermore, an attention mechanism, augmentation of partial data, and clustering of individual samples are employed to improve the model's robustness. Chen et al. [65] exploited a different trace, bi-granularity artifacts: intrinsic-granularity artifacts caused by up-convolution or up-sampling operations, and extrinsic-granularity artifacts resulting from the post-processing step that blends the synthesized face into the original video. DeepFake detection is tackled as a multi-task learning problem, with ResNet-18 as the backbone feature extractor.…”
Section: Detection Based On Convolutional Traces
confidence: 99%
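The statement above describes the BiG-Arts setup as multi-task learning over a shared ResNet-18 backbone. A minimal sketch of such a two-headed detector, with a small stand-in backbone and illustrative layer sizes (all class names, channel counts, and head designs here are assumptions for illustration, not the paper's implementation):

```python
import torch
import torch.nn as nn

class BiGranularityDetector(nn.Module):
    """Sketch of a multi-task DeepFake detector in the spirit of BiG-Arts:
    a shared CNN backbone (ResNet-18 in the paper; a tiny stand-in here)
    feeds two heads, one for real/fake classification (intrinsic-granularity
    artifacts) and one for per-pixel localization of the blending region
    (extrinsic-granularity artifacts)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(        # stand-in for ResNet-18
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(        # real/fake logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )
        self.loc_head = nn.Conv2d(32, 1, 1)   # coarse artifact map

    def forward(self, x):
        feat = self.backbone(x)
        return self.cls_head(feat), self.loc_head(feat)

model = BiGranularityDetector()
imgs = torch.randn(4, 3, 64, 64)              # a dummy batch of face crops
logits, mask = model(imgs)
print(logits.shape, mask.shape)               # (4, 2) and (4, 1, 16, 16)
```

In a multi-task setup like this, the classification and localization losses are typically summed with a weighting factor, so the shared backbone is pushed to encode both kinds of artifact.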
“…With a very tiny amount of calculation, this module may carry the previous feature's attention information to the current feature and increase the positioning accuracy of the fused feature at the channel-attention level. The forgery localization loss is defined as follows:…”
[A comparison table was interleaved with this excerpt during extraction; the recoverable rows (method, three unlabeled metric columns) are:
(unnamed)        99.95  68.22  72.03
FRLM [28]        99.50  70.58  68.17
F3Net [20]       98.10  71.21  86.10
DMGTN [29]       99.80  72.30  -
Face-X-ray [30]  87.40  74.20  85.60
MLDG [31]        98.99  74.56  88.14
GFF [32]         98.36  75.31  85.51
SFDG [33]        99.53  75.83  88.00
SOLA [34]        99.25  76.02  -
MultiAtt [35]    99.27  76.65  87.58
BiG-Arts [36]    99.39  77.04  89.92
LTW [37]         99.17  77.14  88.56
FAAFF [38]       99.  (truncated)]
Section: Forgery Localization Module With Multi-level Feature Fusion
confidence: 99%
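The fusion idea quoted above — a cheap channel-attention vector computed on the previous-level feature and reused to reweight the current-level feature — can be sketched as a squeeze-and-excitation-style gate. The channel count, reduction ratio, and additive fusion below are assumptions for illustration, not the cited paper's exact module:

```python
import torch
import torch.nn as nn

class AttentionCarryFusion(nn.Module):
    """Sketch: derive a channel-attention vector from the PREVIOUS-level
    feature with a tiny pooled MLP (squeeze-and-excitation style), apply it
    to the CURRENT-level feature, then fuse the two by addition."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(               # very small compute cost
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid(),
        )

    def forward(self, prev_feat, cur_feat):
        # attention weights come from the previous feature...
        w = self.gate(prev_feat).unsqueeze(-1).unsqueeze(-1)
        # ...and are carried over to reweight the current feature
        return cur_feat * w + prev_feat

fuse = AttentionCarryFusion(32)
prev = torch.randn(2, 32, 16, 16)
cur = torch.randn(2, 32, 16, 16)
out = fuse(prev, cur)
print(out.shape)                                 # (2, 32, 16, 16)
```

Because the gate only sees a pooled (1x1) summary of the previous feature, its cost is a pair of small linear layers, which matches the "very tiny amount of calculation" claim in the excerpt.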