2018
DOI: 10.48550/arxiv.1804.04391
Preprint

MGGAN: Solving Mode Collapse using Manifold Guided Training


Cited by 8 publications (12 citation statements)
References 2 publications
“…We evaluate our regularizer using a simple GAN on a 2D mixture of 8 Gaussians arranged in a circle. This 2D baseline (Metz et al., 2017; Srivastava et al., 2017; Bang & Shim, 2018) provides a simple qualitative evaluation and demonstrates SVMax's potential in unsupervised learning. We leverage this simple baseline because we assume b ≥ d, which does not hold for images.…”
Section: Generative Adversarial Network
confidence: 94%
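The ring-of-Gaussians benchmark mentioned above is easy to reproduce. The following is a minimal sketch of that dataset; the specific radius and standard deviation are illustrative assumptions, not values taken from any of the cited papers:

```python
import numpy as np

def ring_of_gaussians(n_samples, n_modes=8, radius=2.0, std=0.02, seed=0):
    """Sample a 2D mixture of Gaussians whose means sit evenly on a circle.

    Standard synthetic benchmark for mode collapse: a collapsed generator
    covers only a few of the n_modes clusters. radius/std are assumed values.
    """
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(n_modes) / n_modes
    # Mode centers evenly spaced on a circle of the given radius.
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    # Pick a mode uniformly for each sample, then add isotropic Gaussian noise.
    labels = rng.integers(0, n_modes, size=n_samples)
    samples = means[labels] + rng.normal(scale=std, size=(n_samples, 2))
    return samples, labels

samples, labels = ring_of_gaussians(1000)
```

A generator trained against this data can then be scored by how many of the 8 clusters its samples land in.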
“…For instance, when training on images of digits (the MNIST dataset, with images of the digits 0 to 9), a GAN model would sometimes generate samples for only a single digit [27]. Recent approaches [28,29,30] tackled mode collapse in one of two ways: (1) modifying the learning of the system to reach a better convergence point; or (2) explicitly enforcing the models to capture diverse modes or map back to the true data distribution. Solutions to the mode-collapse problem range from designing a reconstructor network in VEEGAN [29] to matching the similarity matrix of generated samples with that of the data [31].…”
Section: Problems In Using GANs For Design Synthesis
confidence: 99%
“…As VEEGAN estimates the implicit probability of the real data set, it prevents the mode-collapse problem and produces more realistic and diverse samples. [3] proposes a manifold-guided generative adversarial network (MGGAN) that guides the generator by adding another adversarial loss in the manifold space induced by a pre-trained encoder network. This enables the generator to learn all modes of the target distribution without impairing image quality.…”
Section: Related Work
confidence: 99%
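The manifold-guidance idea in the statement above can be summarized as a generator objective with two adversarial terms; the notation below is ours and an assumption about the paper's formulation, not a quotation of it:

```latex
% G: generator, D_x: discriminator on the image space,
% E: pre-trained encoder, D_m: discriminator on the manifold (encoded) space,
% \lambda: balancing coefficient (value not specified here).
\mathcal{L}_G \;=\; \mathcal{L}_{\mathrm{adv}}\bigl(G,\, D_x\bigr)
          \;+\; \lambda \, \mathcal{L}_{\mathrm{adv}}\bigl(E \circ G,\, D_m\bigr)
```

The second term penalizes the generator whenever its samples, once encoded, fail to cover the manifold of encoded real data, which is how missing modes are detected without degrading the image-space adversarial signal.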
“…Figure 6 plots the HQS and JSD of the 2D Grid test of our method at each training iteration, with and without the mode penalty weights w_p in equation (3). The importance weight achieves an HQS gain of around 20%, reaching almost 70% within 20 epochs, even though it fluctuates in very early iterations.…”
Section: Mixture Of Gaussians
confidence: 99%
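The JSD metric referenced above compares the histogram of modes hit by generated samples against the target mixture's (uniform) mode distribution. A generic sketch follows; the paper's exact setup (grid binning, logarithm base) is not reproduced here and the base-2 choice is an assumption:

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (base 2).

    Symmetric and bounded in [0, 1]: 0 for identical distributions,
    1 for distributions with disjoint support. eps guards log(0).
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)  # mixture midpoint
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

A collapsed generator that hits only one of the grid's modes yields a histogram far from uniform and hence a JSD near its upper bound, which is why the metric falls as mode coverage improves.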