2016
DOI: 10.48550/arxiv.1611.04076
Preprint

Least Squares Generative Adversarial Networks

Cited by 109 publications (87 citation statements)
References 13 publications
“…• several more recent GAN architectures that have been applied to the tt̄ dataset: Wasserstein GAN (WGAN) [26], WGAN with Gradient Penalty (WGAN-GP) [27], Least Squares GAN (LS-GAN) [28], Maximum Mean Discrepancy GAN (MMDGAN) [29] and…”
Section: Methods (mentioning)
confidence: 99%
“…where a, b and c are constants that need to be fixed and must satisfy b − a = 2 and b − c = 1 for effectively minimizing the Pearson χ² divergence [28]. By assuming an optimal discriminator D*, choosing a = −1, b = 1, c = 0 and adding a term (1/2) E_{x∼p_d}[(D(x) − c)²] to eq.…”
Section: B Generative Models (mentioning)
confidence: 99%
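The constraint quoted above fully determines the least-squares objectives of LS-GAN [28]. Below is a minimal sketch, not the authors' reference code, assuming a PyTorch discriminator that returns an unbounded scalar score per sample and using the coding a = −1, b = 1, c = 0 from the quoted passage (so b − a = 2 and b − c = 1, the Pearson χ² case):

```python
# Sketch of the LSGAN losses under the coding quoted above (assumption: PyTorch,
# discriminator outputs a raw real-valued score per sample).
import torch

A, B, C = -1.0, 1.0, 0.0  # b - a = 2 and b - c = 1 hold for these values

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator loss: pull D(x) toward B on real data and toward A on fakes."""
    return 0.5 * ((d_real - B) ** 2).mean() + 0.5 * ((d_fake - A) ** 2).mean()

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Generator loss: pull D(G(z)) toward C."""
    return 0.5 * ((d_fake - C) ** 2).mean()
```

In practice the two losses are minimized alternately with separate optimizers for the discriminator and the generator.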
“…Such a two-player game is actually optimized when they reach the Nash equilibrium point, that is, the discriminator cannot tell whether an image is real or not. Many following works improve generative adversarial networks with better losses, training techniques and evaluation metrics [1,13,19,34,16,24,27,35]. The objective of VAEs is to maximize the variational lower bound of the log-likelihood of target data points.…”
Section: Three Basic Generative Models (mentioning)
confidence: 99%
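For reference, the variational lower bound mentioned in the quoted passage is the standard evidence lower bound (ELBO), written here in the usual notation (encoder q_φ(z|x), decoder p_θ(x|z), prior p(z)) rather than taken verbatim from the cited works:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z\mid x)\,\middle\|\,p(z)\right)
```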
“…Generative adversarial networks (GANs) [1,11,27,32] and variational auto-encoders (VAEs) [18] are two types of the most popular generative models due to their solid theoretic foundation and excellent results. Also, the performance of conditional image synthesis by these models improves rapidly with the fast development of deep learning.…”
Section: Introduction (mentioning)
confidence: 99%