2020
DOI: 10.1016/j.sigpro.2020.107616

Identification of deep network generated images using disparities in color components

Abstract: With powerful deep network architectures, such as generative adversarial networks and variational autoencoders, large numbers of photorealistic images can be generated. The generated images, which already fool human eyes successfully, were not initially targeted at deceiving image authentication systems. However, research communities as well as the public media have shown great concern over whether these images could lead to serious security issues. In this paper, we address the problem of detecting deep network generat…

Citations: cited by 182 publications (110 citation statements).
References: 29 publications.
“…Compared to the popularity of strategies for synthesizing face images with GANs, methodologies for differentiating real and synthesized images remain far from satisfactory. Li et al. [10] observed a color mismatch in the H, S, V and Cb, Cr, Y channels between real and GAN-generated images. Similarly, McCloskey and Albright identified that the frequency of saturated pixels and the color image statistics of GAN-generated images differ from those of images captured by cameras [13].…”
Section: Detection Methods for GAN-Synthesized Images
Citation type: mentioning (confidence: 99%)
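The color-disparity cue described in this citation can be made concrete with a small feature extractor. The sketch below is not code from [10] or [13]; the function name, the chosen statistics, and the saturation test are illustrative assumptions. It collects per-channel means and standard deviations in the HSV and YCbCr spaces plus the fraction of saturated pixels, and such a vector could then feed a conventional classifier trained to separate camera-captured from GAN-generated images.

import cv2
import numpy as np

def color_component_features(bgr_image: np.ndarray) -> np.ndarray:
    """Per-channel mean/std in HSV and YCbCr, plus the saturated-pixel rate."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    ycc = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # OpenCV order: Y, Cr, Cb
    feats = []
    for space in (hsv, ycc):
        for c in range(3):
            channel = space[:, :, c].astype(np.float64)
            feats.extend([channel.mean(), channel.std()])
    # Fraction of pixels with any channel at its maximum value, echoing the
    # saturated-pixel observation attributed to McCloskey and Albright above.
    feats.append(float(np.any(bgr_image == 255, axis=2).mean()))
    return np.asarray(feats)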
“…Unlike previous image/video manipulation methods, realistic images are generated entirely from random noise through a deep neural network. Current detection methods are based on low-level features such as color disparities [10,13], or feed the whole image to a neural network to extract holistic features [19].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
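For the second family mentioned in this citation, feeding the whole image to a network for holistic features, a minimal binary classifier might look like the sketch below. The architecture is an illustrative assumption written in PyTorch, not the network of reference [19].

import torch.nn as nn

# Minimal CNN sketch: whole RGB image in, real-vs-GAN logits out.
gan_image_detector = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),  # two classes: camera-captured vs. GAN-generated
)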
“…Several methods have been proposed in the area of image forensics over the past years [25,26,27,28,29]. Recent approaches have focused on applying deep learning based methods to detect tampered images [30,31,32,33,34,9,35]. The detection of GAN images is a new area in image forensics and there are still very few papers on it [36,37,38,39,40,41,42,43,44,45]. Related fields also include the detection of computer generated (CG) images [46,47,48,49].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Their top results were obtained using a combination of residual features [50,9] and deep learning [51]. Similar to [36], the authors in [38] compute the residuals of high-pass filtered images and then extract co-occurrence matrices on these residuals, which are concatenated to form a feature vector that can distinguish real from fake GAN images. In contrast to these approaches, our approach does not need any image residuals to be computed.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
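The residual/co-occurrence pipeline summarized in this citation can be sketched as follows. This is a minimal illustration of the general idea (high-pass residual, truncation, co-occurrence accumulation), not the exact feature set of [36] or [38]; the first-order difference filter and the truncation threshold T are assumptions.

import numpy as np
from scipy.signal import convolve2d

def residual_cooccurrence(gray: np.ndarray, T: int = 2) -> np.ndarray:
    """Horizontal co-occurrence matrix of a truncated high-pass residual."""
    # First-order horizontal difference as a simple high-pass filter.
    residual = convolve2d(gray.astype(np.float64), np.array([[1.0, -1.0]]), mode="valid")
    # Quantize and truncate residual values to [-T, T].
    residual = np.clip(np.round(residual), -T, T).astype(int)
    # Count co-occurrences of horizontally adjacent residual values.
    size = 2 * T + 1
    cooc = np.zeros((size, size))
    np.add.at(cooc, (residual[:, :-1].ravel() + T, residual[:, 1:].ravel() + T), 1)
    return (cooc / cooc.sum()).ravel()  # normalized feature vector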
“…Some of them exploit specific facial artifacts, such as asymmetries in eye color, or artifacts arising from an imprecise estimation of the underlying geometry, especially in the areas around the nose, the border of the face, and the eyebrows [7]. Color information is instead exploited in [8], [9]. In particular, [8] proposes to use features shared by different GAN architectures, based on the way they transform a multichannel feature map into a 3-channel color image.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)