2023
DOI: 10.1109/ojsp.2023.3337714

Synthbuster: Towards Detection of Diffusion Model Generated Images

Quentin Bammey

Abstract: Synthetically generated images are becoming increasingly popular. Diffusion models have advanced to the stage where even non-experts can generate photo-realistic images from a simple text prompt. They expand creative horizons but also open a Pandora's box of potential disinformation risks. In this context, the present corpus of synthetic image detection techniques, primarily focused on older generative models such as Generative Adversarial Networks, finds itself ill-equipped to deal with this emerging trend. Reco…

Cited by 12 publications (2 citation statements) · References 42 publications
“…Steganalysis is performed with EfficientNet-v2 against the MiPOD embedding algorithm with a payload of 0.4 bpp (bits per pixel). Similar conclusions can be drawn from [3]: the inconsistency between deepfake cover-sources is very large, even for models that are supposedly close to each other (see, for instance, the StableDiffusion family). However, one can observe that the only common cover-source, namely StableDiffusion-XL, exhibits results similar to those reported in Table 6.…”
Section: Results on the Robustness of Steganalysis: Inconsistency Bet... (supporting)
confidence: 73%
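The citation statement above describes a steganalysis setup: an EfficientNet-v2 network trained to distinguish cover images from MiPOD stego images at a 0.4 bpp payload. The following is a minimal sketch of such a two-class detector, assuming the `timm` implementation of EfficientNet-v2; the exact model variant, input size, pretraining and training schedule used in the cited work are not given here and are chosen purely for illustration.

```python
# Minimal sketch (assumptions): an EfficientNet-v2 backbone from `timm`,
# fine-tuned as a cover-vs-stego binary classifier. Variant, input size and
# optimiser settings are illustrative, not those of the cited work.
import timm
import torch
import torch.nn as nn

# Two output classes: 0 = cover image, 1 = MiPOD stego image (0.4 bpp payload).
# ImageNet pretraining is common in practice; disabled here to keep the sketch
# self-contained (no weight download required).
model = timm.create_model("tf_efficientnetv2_s", pretrained=False, num_classes=2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of cover/stego images."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch, 2)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage example with random tensors standing in for real cover/stego batches.
dummy_images = torch.randn(4, 3, 256, 256)
dummy_labels = torch.randint(0, 2, (4,))
print(training_step(dummy_images, dummy_labels))
```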
“…Note that these large inconsistencies do not stem from different prompts; indeed, we used the same prompts for every generator, so the high inconsistencies can only be explained by the deepfake cover-sources. We further confirmed these observations using an additional dataset from [3]. Table 8 shows a selected subset of the inconsistencies obtained when training on the same 15 generators as in Tables 6 and 7, but testing on these additional images, as for Table 6.…”
Section: Results on the Robustness of Steganalysis: Inconsistency Bet... (supporting)
confidence: 62%
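The second citation statement concerns cover-source mismatch: a detector trained on images from one set of generators performs worse on images from an unseen generator. The sketch below illustrates one plausible way to quantify such an inconsistency, as the increase in error rate between the intrinsic (matched) test set and a mismatched one; the metric, function names and numbers are assumptions for illustration, not the measure actually reported in the cited tables.

```python
# Minimal sketch (assumptions): measuring cover-source mismatch as the gap
# between the error rate on a matched test set and on images from a generator
# unseen at training time. Names and values are illustrative only.
import numpy as np

def error_rate(predictions: np.ndarray, labels: np.ndarray) -> float:
    """Probability of error of a binary cover/stego detector."""
    return float(np.mean(predictions != labels))

def mismatch_inconsistency(pred_intrinsic, labels_intrinsic,
                           pred_mismatched, labels_mismatched) -> float:
    """Mismatched error rate minus intrinsic error rate."""
    return (error_rate(pred_mismatched, labels_mismatched)
            - error_rate(pred_intrinsic, labels_intrinsic))

# Usage example with synthetic predictions: the intrinsic test set shares its
# cover-source with training, the mismatched one comes from an unseen generator.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
pred_intrinsic = np.where(rng.random(1000) < 0.90, labels, 1 - labels)   # ~10% error
pred_mismatched = np.where(rng.random(1000) < 0.65, labels, 1 - labels)  # ~35% error
print(mismatch_inconsistency(pred_intrinsic, labels, pred_mismatched, labels))
```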