2022
DOI: 10.1007/978-3-031-19781-9_5

FingerprintNet: Synthesized Fingerprints for Generated Image Detection

Cited by 8 publications (3 citation statements) · References 40 publications
“…Moreover, the backward process involves learning the denoising process, that is, from a noise‐corrupted image to a clear image. Currently, there are still very few studies carried out in the sense of using diffusion models to generate deepfake (Jeong, Kim, Ro, Kim, & Choi, 2022; Mandelli et al, 2022) and, in the same sense, few studies involving the detection of deepfake also generated by diffusion models.…”
Section: Discussion and Open Issues (mentioning)
confidence: 99%
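The backward (denoising) process referenced in this excerpt follows the standard diffusion-model formulation; as a rough sketch (added here for illustration and not part of the cited text), the forward process gradually corrupts a clean image x_0 with Gaussian noise, and a learned network reverses that corruption one step at a time:

% Standard DDPM formulation (illustrative; symbols beta_t, mu_theta, Sigma_theta are the usual
% noise schedule and learned reverse-process parameters, not notation from the cited excerpt).
\[
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right),
\qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right),
\]

where \(\beta_t\) is the noise schedule and \(\mu_\theta, \Sigma_\theta\) are predicted by the denoising network.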
“…In the 1- and 2-class settings, our FreqNet also achieves gains of 2.9% and 0.4% compared to Ojha. Furthermore, compared to FingerprintNet (Jeong et al 2022b) tested on six unseen models, our FreqNet achieves a marked improvement in mean accuracy, rising from 82.6% to 90.6% and exhibiting a significant gain of 8.0%. Additionally, we provide results on 9 models from self-synthesis in Table 2.…”
Section: Deepfake Performance On Real-world Scene (mentioning)
confidence: 95%
“…Moreover, the backward process involves learning the denoising process, i.e., from a noise-corrupted image to a clear image. Currently, there are still very few studies carried out in the sense of using diffusion models to generate deepfake [142,143] and, in the same sense, few studies involving the detection of deepfake also generated by diffusion models.…”
Section: Opportunities and Future Challenges (mentioning)
confidence: 99%