2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00460

Seeing What a GAN Cannot Generate

Abstract: Despite the success of Generative Adversarial Networks (GANs), mode collapse remains a serious issue during GAN training. To date, little work has focused on understanding and quantifying which modes have been dropped by a model. In this work, we visualize mode collapse at both the distribution level and the instance level. First, we deploy a semantic segmentation network to compare the distribution of segmented objects in the generated images with the target distribution in the training set. Differences in st…
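
The distribution-level comparison described in the abstract can be sketched in a few lines: segment a batch of real images and a batch of generated images with an off-the-shelf network, then compare the per-class pixel-area statistics. This is a minimal sketch, not the authors' exact pipeline; the generator samples (fake_batches), the training images (real_batches), and the use of torchvision's DeepLabV3 as the segmenter are all stand-in assumptions.

# Sketch: compare segmented-object statistics of real vs. generated images.
# DeepLabV3 (Pascal VOC classes) stands in for the paper's scene segmenter.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
seg = deeplabv3_resnet50(weights="DEFAULT").eval().to(device)
NUM_CLASSES = 21  # number of classes of this stand-in segmenter

@torch.no_grad()
def class_area_histogram(images):
    """Fraction of pixels assigned to each class in a batch of images."""
    logits = seg(images.to(device))["out"]      # (B, C, H, W)
    labels = logits.argmax(dim=1)               # (B, H, W)
    hist = torch.bincount(labels.flatten(), minlength=NUM_CLASSES).float()
    return hist / hist.sum()

@torch.no_grad()
def mean_histogram(batches):
    """Average class-area histogram over an iterable of image batches."""
    return torch.stack([class_area_histogram(x) for x in batches]).mean(dim=0)

# real_stats = mean_histogram(real_batches)   # batches of training images
# fake_stats = mean_histogram(fake_batches)   # batches of G(z) samples
# Classes where real_stats is much larger than fake_stats are candidate dropped modes:
# dropped = (real_stats - fake_stats).clamp(min=0)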

Cited by 329 publications (225 citation statements)
References 32 publications

“…Training a GAN is unstable because it searches for a Nash equilibrium in a high-dimensional space. In addition, mode collapse is a serious issue during GAN training, as addressed by Bau et al. [34]. These problems make it difficult to train generative adversarial networks.…”
Section: Methods
confidence: 99%
“…Most recent works involve GAN-generated images and videos and the detection of counterfeits using GANs themselves, covering both GAN-generated deepfakes and GAN-based deepfake detection. Bau et al. [38] concluded that GANs have limited generative capacity and showed that a pre-trained GAN model is unable to capture all of the image structure present in its training dataset. Given these limitations in generation, a GAN model alone is not reliable for detecting deepfakes.…”
Section: Artifacts From GANs
confidence: 99%
“…Although WGAN-GP [34] overcame the mode-collapse and training-convergence issues, the use of a gradient penalty weakened the representation capacity of the GAN [40].…”
Section: Introduction
confidence: 99%
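
Since the statement above refers to the WGAN-GP gradient penalty [34], a standard sketch of that term is shown below; the critic D and the weight lambda_gp are generic placeholders rather than anything specific to the cited papers.

import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    interpolations between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(interp).sum(), inputs=interp,
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

# Added to the critic loss during training, e.g.:
# d_loss = D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)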
“…To deal with this problem, and informed by the results of [38], [40], we propose a residual-CNN-block generator and discriminator for noise learning with a least-squares loss [41]. The structural similarity (SSIM) [36] and L1 losses are integrated into the overall objective function.…”
Section: Introduction
confidence: 99%
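
The quoted objective combines a least-squares adversarial term [41] with SSIM [36] and L1 reconstruction losses. The sketch below illustrates one way such a combination can look; the simplified global-statistics SSIM and the loss weights w_ssim and w_l1 are illustrative assumptions, not the cited paper's exact formulation.

import torch
import torch.nn.functional as F

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM using whole-image statistics (a windowed SSIM, as in [36],
    averages the same quantity over local patches). Assumes inputs in [0, 1]."""
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x, var_y = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mu_x.view(-1, 1, 1, 1)) * (y - mu_y.view(-1, 1, 1, 1))).mean(dim=(1, 2, 3))
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def generator_loss(D, fake, target, w_ssim=1.0, w_l1=1.0):
    """Least-squares adversarial loss plus SSIM and L1 reconstruction terms."""
    adv = ((D(fake) - 1) ** 2).mean()        # LSGAN generator objective
    return adv + w_ssim * (1.0 - ssim_global(fake, target)) + w_l1 * F.l1_loss(fake, target)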