2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9191083

Empirical Analysis of Overfitting and Mode Drop in GAN Training

Abstract: We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective. We show that when stochasticity is removed from the training procedure, GANs can overfit and exhibit almost no mode drop. Our results shed light on important characteristics of the GAN training procedure. They also provide evidence against prevailing intuitions that GANs do not memorize the training set, and that mode dropping is mainly due to properties of the GAN objective rather than how it is optimized.
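The de-stochasticized setup the abstract refers to can be pictured as pairing every training example with a latent code that stays fixed throughout training and taking full-batch gradient steps, so nothing in an update is sampled afresh. The PyTorch sketch below illustrates such a loop on toy data; the architectures, sizes, and hyperparameters are placeholders, and the paper's exact protocol may differ.

```python
# Hypothetical sketch of "de-stochasticized" GAN training: a fixed latent
# code is paired with each training point and every update uses the full
# batch, so no minibatch sampling or fresh z-draws occur. Toy data and
# architectures; not necessarily the paper's exact protocol.
import torch
import torch.nn as nn

torch.manual_seed(0)

N, z_dim, x_dim = 256, 8, 2
real = torch.randn(N, x_dim)        # stand-in for the training set
z_fixed = torch.randn(N, z_dim)     # latent codes fixed for all of training

G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator update on the full batch (no stochastic subsampling).
    fake = G(z_fixed).detach()
    loss_d = bce(D(real), torch.ones(N, 1)) + bce(D(fake), torch.zeros(N, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update reuses the same fixed codes every step.
    loss_g = bce(D(G(z_fixed)), torch.ones(N, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```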

Cited by 29 publications (19 citation statements). References 5 publications.
“…Although GANs are used to obtain reliable data relatively easily, the performance of the GAN itself is closely related to how much input data it receives [22]. The key problem with small datasets is that the discriminator overfits the training examples [32]. With small datasets, the feedback to the generator becomes meaningless and training starts to diverge.…”
Section: Discussion
confidence: 99%
“…[22] The key problem with small datasets is that the discriminator overfits the training examples [32]. With small datasets, feedback to the generator becomes meaningless and training starts to diverge. Medical data is particularly costly and time-consuming to integrate and standardize in one place compared to other data.…”
Section: GANspace
confidence: 99%
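Both statements above point to the same failure mode: on small datasets the discriminator memorizes the training reals. A simple way to watch for this (illustrative only; the function name and setup are mine, not taken from either cited paper) is to track the gap between the discriminator's mean logit on training reals and on held-out reals:

```python
# Hypothetical overfitting diagnostic: compare the discriminator's mean
# logit on training reals vs. held-out reals. A widening gap suggests D is
# memorizing the training examples (names here are illustrative).
import torch

@torch.no_grad()
def d_overfit_gap(D, train_real, heldout_real):
    """Mean logit difference; near 0 early in training, grows as D overfits."""
    return (D(train_real).mean() - D(heldout_real).mean()).item()
```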
“…However, the theoretical analysis conducted in (Arora et al. 2017) showed that the training objective can approach its optimum value even though the generated distribution is far from the target distribution. Moreover, GAN models suffer from well-known problems, such as mode dropping or mode collapse (Yazici et al. 2020), which affect the quality of the generated samples. Understanding whether GANs can reproduce the same level of similarity exhibited by the original Monte Carlo data set is essential in the context of scientific simulations.…”
Section: Discussion
confidence: 99%
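Since mode drop is the other phenomenon the paper analyses, one illustrative way to quantify it is on a toy 2-D Gaussian-mixture testbed, a common synthetic benchmark rather than anything used in the cited works; the thresholds below are ad hoc:

```python
# Illustrative mode-coverage check on a 2-D Gaussian mixture: assign each
# generated sample to its nearest mixture centre and count centres that are
# "hit". Dropped modes show up as uncovered centres. Thresholds are ad hoc.
import torch

def covered_modes(samples, centers, radius=0.5, min_hits=5):
    # samples: (M, 2) generated points; centers: (K, 2) true mode locations
    d = torch.cdist(samples, centers)      # (M, K) pairwise distances
    nearest = d.argmin(dim=1)              # index of the closest centre
    close = d.min(dim=1).values < radius   # only count confident hits
    hits = torch.bincount(nearest[close], minlength=len(centers))
    return int((hits >= min_hits).sum())   # number of covered modes
```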
“…[1] studies the relationship between GAN memorization and discriminator size, concluding that the distributions learnt by GANs have significantly less support than real distributions. [28] then shows that fixing the latent codes during training is enough to achieve GAN memorization.…”
Section: Related Work
confidence: 97%
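As a companion to the fixed-latent-code result in [28], a hypothetical memorization probe (my naming, not from the cited work) is the nearest-neighbour distance from each generated sample to the training set; distances collapsing toward zero indicate the generator is reproducing training points:

```python
# Hypothetical memorization probe: for each generated sample, the distance
# to its nearest training example. Distances collapsing toward zero are
# consistent with the memorization reported when latent codes are fixed.
import torch

@torch.no_grad()
def nn_distance_to_train(generated, train_set):
    # generated: (M, D) flattened samples; train_set: (N, D) training data
    d = torch.cdist(generated, train_set)   # (M, N) pairwise L2 distances
    return d.min(dim=1).values              # nearest-neighbour distance per sample
```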