2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00367

Generative Modeling Using the Sliced Wasserstein Distance

Abstract: Generative Adversarial Nets (GANs) are very successful at modeling distributions from given samples, even in the high-dimensional case. However, their formulation is also known to be hard to optimize and often unstable. While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein distance rather than the Jensen-Shannon divergence. Here, we consider an alternative for…

Cited by 135 publications (147 citation statements). References 26 publications (38 reference statements).
“…In practice, Deshpande et al. [8] approximate the sliced Wasserstein-2 distance between the distributions by using samples D ∼ P_d, F ∼ P_g, and a finite number of random Gaussian directions, replacing the integration over Ω with a summation over a randomly chosen set of unit vectors Ω̂ ∝ N(0, I), where '∝' is used to indicate normalization to unit length. With P_g (and hence, F) being implicitly parametrized by θ_g, [8] uses the following program for generative modeling:…”
Section: Introduction
confidence: 99%
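For concreteness, the Monte Carlo approximation described in this excerpt can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function name, the default of 100 projections, and the equal-sample-size assumption are illustrative choices:

```python
import numpy as np

def sliced_wasserstein2_sq(D, F, num_projections=100, rng=None):
    """Monte Carlo estimate of the squared sliced Wasserstein-2 distance
    between two equal-sized sample sets D, F of shape (n, d)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = D.shape
    # Random Gaussian directions, normalized to unit length -- the
    # 'proportional to N(0, I)' construction described in the quote.
    omega = rng.standard_normal((num_projections, d))
    omega /= np.linalg.norm(omega, axis=1, keepdims=True)
    # Project both sample sets onto each direction (one 1-D marginal each).
    proj_D = np.sort(D @ omega.T, axis=0)   # shape (n, num_projections)
    proj_F = np.sort(F @ omega.T, axis=0)
    # In 1-D, the optimal W2 coupling matches sorted samples monotonically,
    # so the squared distance reduces to a mean of squared differences.
    return np.mean((proj_D - proj_F) ** 2)
```

In the quoted program, training would then minimize such an estimate over the generator parameters θ_g, with fresh projection directions drawn at each step.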
“…A. Laplacian SWD: SWD, or Sliced Wasserstein Distance, is a metric that measures the overall deviation between the original training dataset and the generator-synthesized dataset [46]. The standard Wasserstein Distance is difficult to compute on such high-dimensional input, especially in images, due to the three color-channel RGB values attached to each tensor.…”
Section: Results
confidence: 99%
“…The standard Wasserstein Distance is difficult to compute on such high-dimensional input, especially in images, due to the three color-channel RGB values attached to each tensor. Each image is turned into a Laplacian pyramid [46], a multi-scale data structure with layers representing each resolution at which the image was generated. To cross resolutions, upsampling, downsampling, and blurring functions are applied to the original large image, and the results are then all compiled into one file.…”
Section: Results
confidence: 99%
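The pyramid construction described in this excerpt can be sketched as follows, assuming (H, W, 3) float images. The Gaussian blur, factor-2 resampling, and level count below are illustrative assumptions, not the exact operators used in [46]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(img, levels=4, sigma=1.0):
    """Build a Laplacian pyramid for an (H, W, 3) image: each level keeps
    the detail lost when moving to the next-coarser resolution."""
    pyramid = []
    current = img.astype(np.float64)
    for _ in range(levels - 1):
        # Blur only the spatial axes, not the RGB channel axis.
        blurred = gaussian_filter(current, sigma=(sigma, sigma, 0.0))
        down = blurred[::2, ::2]                               # downsample x2
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)  # crude upsample
        up = up[:current.shape[0], :current.shape[1]]          # crop to match
        pyramid.append(current - up)   # band-pass 'detail' layer
        current = down
    pyramid.append(current)            # low-resolution residual
    return pyramid
```

A Laplacian SWD would then compare corresponding levels of the real and synthesized pyramids, e.g. with a sliced Wasserstein estimate like the one sketched earlier, so that each resolution band is scored separately.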
“…One of the thought-provoking approaches is via slicing high-dimensional distributions over their one-dimensional marginals and comparing their marginal distributions (Kolouri et al., 2019b; Nadjahi et al., 2019b). The idea of slicing distributions is related to the Radon transform and has been successfully used in, for instance, sliced-Wasserstein distances in various applications (Rabin et al., 2011; Kolouri et al., 2016; Carriere et al., 2017; Deshpande et al., 2018; Kolouri et al., 2018; Nadjahi et al., 2019a). More recently, Kolouri et al. (2019a) extended the idea of linear slices of distributions, used in sliced-Wasserstein distances, to non-linear slicing of high-dimensional distributions, which is rooted in the generalized Radon transform.…”
Section: Introduction
confidence: 99%
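For reference, the linear slicing this passage refers to admits a compact definition. Writing θ_#μ for the pushforward of μ under the projection x ↦ ⟨θ, x⟩ (a one-dimensional marginal of μ, equivalently one slice of the Radon transform of its density), the sliced p-Wasserstein distance is

```latex
\mathrm{SW}_p(\mu,\nu)
  = \left( \int_{\mathbb{S}^{d-1}}
      W_p^p\bigl(\theta_{\#}\mu,\, \theta_{\#}\nu\bigr)\,
      \mathrm{d}\sigma(\theta) \right)^{1/p},
```

where σ is the uniform measure on the unit sphere S^{d-1}. The generalized Radon transform mentioned at the end replaces the linear projection ⟨θ, x⟩ with a non-linear defining function g(x, θ), giving the non-linear slices of Kolouri et al. (2019a).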