Relaxed Wasserstein with Applications to GANs
Preprint, 2017
DOI: 10.48550/arxiv.1705.07164

Cited by 9 publications (10 citation statements)
References 16 publications
“…Broadly speaking, previous work on GANs studies three main properties: (1) Stability, where the focus is on the convergence of the commonly used alternating gradient descent approach to global/local optimizers (equilibria) of the GAN objective (e.g., [6, 10–13]); (2) Formulation, where the focus is on designing proper loss functions for GAN optimization (e.g., WGAN + weight clipping [4], WGAN + gradient penalty [5], GAN + spectral normalization [14], WGAN + truncated gradient penalty [15], relaxed WGAN [16], f-GAN [17], MMD-GAN [18, 19], least-squares GAN [20], boundary equilibrium GAN [21]); and (3) Generalization, where the focus is on understanding the number of samples required to learn a probability model using GANs (e.g., [22]).…”
Section: Prior Work
mentioning confidence: 99%
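The "Formulation" line of work quoted above swaps the original GAN loss for Wasserstein-type objectives. As a point of reference, here is a minimal sketch (helper name is ours, plain NumPy, illustrative only) of the Wasserstein-1 distance that WGAN-style critics approximate, using the sorted-sample closed form available in one dimension:

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D samples.

    For empirical distributions on the real line, W1 reduces to the
    mean absolute difference of the sorted samples (order statistics).
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(x - y)))

# Shifting a sample by a constant c shifts W1 by exactly |c|:
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
print(wasserstein1_1d(x, x + 2.0))  # 2.0 (up to float precision)
```

This translation property is one reason the Wasserstein objective gives informative gradients even when the model and data distributions are far apart.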
“…Accordingly, the representation (9) to (13) applies to (such kinds of) CBD. The corresponding special case of (10) is called "a relaxed Wasserstein distance (parameterized by φ) between P and Q" in the recent papers of Lin et al. [20] and Guo et al. [12] for a restrictive setup where P and Q are assumed to have compact support; the latter two references do not draw connections to divergences of quantile functions, but substantially concentrate on applications to topic sparsity for analyzing user-generated web content and social media, respectively to generative adversarial networks (GANs). [Υ_{φ,c,W,W}] is quasi-antitone on ]0, ∞[ × ]0, ∞[ if the generator function φ is strictly convex and thrice continuously differentiable on ]0, ∞[ (and hence c is obsolete) and the so-called scale connector W is twice continuously differentiable such that, on ]0, ∞[ × ]0, ∞[, Υ_{φ,c,W,W} is twice continuously differentiable and ∂²Υ_{φ,c,W,W}(u, v)/∂u∂v ≤ 0 (an explicit formula for the latter is given in the appendix of Kißlinger & Stummer [16]).…”
Section: New Optimal Transport Problems
mentioning confidence: 99%
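The passage above describes the relaxed Wasserstein distance as an optimal-transport cost whose ground cost is a Bregman divergence generated by a convex function φ. A toy, self-contained sketch of that construction (function names are ours; scalar inputs; brute-force assignment, so toy sizes only) for uniform discrete measures:

```python
from itertools import permutations

def bregman(x, y, phi, dphi):
    """Bregman divergence B_phi(x, y) = phi(x) - phi(y) - phi'(y) * (x - y),
    here for scalar inputs and a differentiable strictly convex phi."""
    return phi(x) - phi(y) - dphi(y) * (x - y)

def relaxed_wasserstein(xs, ys, phi, dphi):
    """Optimal-transport cost between two equal-size uniform discrete
    measures under a Bregman ground cost.  With uniform weights this
    reduces to an assignment problem, solved here by brute force."""
    n = len(xs)
    return min(
        sum(bregman(xs[i], ys[p[i]], phi, dphi) for i in range(n)) / n
        for p in permutations(range(n))
    )

# phi(u) = u^2 gives B_phi(x, y) = (x - y)^2, i.e. a squared-distance cost.
phi = lambda u: u * u
dphi = lambda u: 2.0 * u
print(relaxed_wasserstein([0.0, 1.0], [0.5, 1.5], phi, dphi))  # 0.25
```

Choosing other generators φ (e.g. negative entropy) yields asymmetric transport costs, which is the "relaxation" relative to the metric cost of the classical Wasserstein distance.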
“…In essence, GANs minimize a proper divergence between the true distribution and the generated distribution: for instance, [21] uses the f-divergence, [27] explores the scaled Bregman divergence, [1] adopts the Wasserstein-1 distance, and [14] proposes the relaxed Wasserstein divergence.…”
Section: Review: GANs and MFGs as Minimax Games
mentioning confidence: 99%
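The appeal of Wasserstein-type divergences over the original GAN's Jensen–Shannon objective is easiest to see on distributions with disjoint supports: JS saturates at log 2 regardless of how far apart the distributions sit, while W1 still measures the separation. A small illustrative comparison (helper names are ours) on discrete 1-D distributions:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (nats)."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def w1_discrete(p, q, support):
    """W1 on a sorted 1-D support via the CDF formula:
    integral of |F_p - F_q| over the support."""
    cdf_diff = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    return float(np.sum(cdf_diff * np.diff(support)))

support = np.arange(5, dtype=float)   # grid points 0..4
p = np.array([1.0, 0, 0, 0, 0])       # all mass at 0
for theta in (1, 2, 3):
    q = np.zeros(5); q[theta] = 1.0   # all mass at theta
    print(js_divergence(p, q), w1_discrete(p, q, support))
# JS stays at log 2 ~ 0.693 for every theta, while W1 grows as |theta|
```

This is exactly the gradient-signal argument behind Wasserstein-based GAN losses: moving the generated mass closer to the data strictly decreases W1, whereas JS gives no such signal while the supports remain disjoint.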