2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00266

MGGAN: Solving Mode Collapse Using Manifold-Guided Training

Abstract: Mode collapse is a critical problem in training generative adversarial networks. To alleviate mode collapse, several recent studies introduce new objective functions, network architectures, or alternative training schemes. However, these gains often come at the cost of image quality. In this paper, we propose a new algorithm, namely a manifold-guided generative adversarial network (MGGAN), which leverages a guidance network on an existing GAN architecture to induce the generator to learn all modes o…
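
The truncated abstract only outlines the approach. As a rough illustration of the idea, the sketch below adds a frozen, pre-trained encoder as a guidance network to a toy GAN and matches generated and real images in the encoder's feature space, so the generator is pulled toward the data manifold. All network shapes, the weight lambda_guide, and the feature-matching form are illustrative assumptions, not the paper's exact formulation.

# Minimal, hypothetical sketch of manifold-guided GAN training (illustrative,
# not the paper's exact method). A frozen, pre-trained encoder E acts as the
# guidance network: generated images are pulled toward the manifold of real
# images in E's feature space, in addition to the usual adversarial loss.
import torch
import torch.nn as nn

latent_dim, img_dim, feat_dim = 64, 784, 32

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
E = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))  # assumed pre-trained, kept frozen
for p in E.parameters():
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_guide = 1.0  # illustrative weight for the manifold-guidance term

def train_step(real):
    batch = real.size(0)
    z = torch.randn(batch, latent_dim)
    fake = G(z)

    # Discriminator update: standard GAN loss on real vs. detached fake samples.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial loss plus a manifold-guidance loss that
    # matches encoder features of the fake batch to those of the real batch.
    opt_g.zero_grad()
    g_adv = bce(D(fake), torch.ones(batch, 1))
    g_guide = (E(fake).mean(0) - E(real).mean(0)).pow(2).mean()
    (g_adv + lambda_guide * g_guide).backward()
    opt_g.step()
    return d_loss.item(), g_adv.item(), g_guide.item()

# Example usage with random "real" data standing in for a dataset batch.
print(train_step(torch.rand(16, img_dim) * 2 - 1))

In a real pipeline the guidance encoder would typically be pre-trained on the target dataset (for example as part of an autoencoder) and kept frozen, so its feature space approximates the data manifold that the generator is steered toward.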

Cited by 46 publications (36 citation statements). References 18 publications.
“…Level 1: Traditional mode collapse [17,19,11,27,26,20,21] - strange, inappropriate patterns become a point attractor in the non-linear cyclic process [28] between the generator and the discriminator (cf. Figure 6 in [17]); Level 2: Light mode collapse [12,29,3,30,31] - patterns from the training set become wholly or partially an attractor for the generator. This is akin to lookup-table behavior made possible by the high number of parameters in a GAN.…”
Section: Avoiding Mode Collapse
Citation type: mentioning; confidence: 99%
“…This encoder structuring the information is important to "bridge the gap" between generative and discriminative tasks. It is still rare in generative models [23,9,7] and is also a direction to help to prevent mode collapse [38,1]. Unlike approaches like Engel et al. [11] that assume the generative model separated the identity and attribute information in the latent space without supervision, we choose to explicitly enforce a structure in the latent space.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
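
As a rough illustration of what "explicitly enforcing a structure in the latent space" can mean, the sketch below splits an encoder's output into identity and attribute sub-vectors and supervises only the identity half with a classifier. The split sizes, the classifier head, and the loss are assumptions made for illustration, not the cited paper's actual method.

# Hypothetical illustration of enforcing structure in an encoder's latent
# space: the latent vector is split into an "identity" part and an "attribute"
# part, and the identity part is supervised with a classifier so the split is
# explicit rather than emergent. Shapes and losses are illustrative only.
import torch
import torch.nn as nn

img_dim, id_dim, attr_dim, num_ids = 784, 16, 16, 10

encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, id_dim + attr_dim))
id_classifier = nn.Linear(id_dim, num_ids)  # supervises only the identity half
ce = nn.CrossEntropyLoss()

def structured_latent_loss(images, identity_labels):
    z = encoder(images)
    z_id, z_attr = z[:, :id_dim], z[:, id_dim:]
    # The identity half must predict the identity label; the attribute half is
    # left free here (a decoder or GAN loss would constrain it in a full model).
    return ce(id_classifier(z_id), identity_labels), z_id, z_attr

# Example usage with random images and random identity labels.
loss, z_id, z_attr = structured_latent_loss(torch.rand(8, img_dim), torch.randint(0, num_ids, (8,)))
print(loss.item(), z_id.shape, z_attr.shape)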
[Figure labels: identity source: image x(1); attributes source: image x(2)]
“…how much of the "undesired" information has been removed through disentangling. To summarize the overall quality of a model, we propose an aggregated metric that averages those 4 values.…”
Section: Disentangling Evaluation
Citation type: mentioning; confidence: 99%