2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00383

Sliced Wasserstein Generative Models

Abstract: In generative modeling, the Wasserstein distance (WD) has emerged as a useful metric to measure the discrepancy between generated and real data distributions. Unfortunately, it is challenging to approximate the WD of high-dimensional distributions. In contrast, the sliced Wasserstein distance (SWD) factorizes high-dimensional distributions into their multiple one-dimensional marginal distributions and is thus easier to approximate. In this paper, we introduce novel approximations of the primal and dual SWD. Inst…
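To make the slicing idea concrete, here is a minimal sketch of the standard Monte Carlo SWD estimator, not the paper's implementation: project both sample sets onto random directions and use the closed-form one-dimensional Wasserstein distance, which reduces to matching sorted samples. The function name `sliced_wasserstein` and the parameter choices (p = 2, equal sample sizes, 100 projections) are illustrative assumptions.

```python
# Minimal sketch of a Monte Carlo sliced Wasserstein estimator (illustrative,
# not the paper's method). Assumes equal sample sizes and p = 2.
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Approximate SW_2 between empirical samples x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        # Draw a random direction uniformly on the unit sphere.
        theta = rng.normal(size=x.shape[1])
        theta /= np.linalg.norm(theta)
        # Project both samples onto the direction: their 1D marginals.
        px, py = x @ theta, y @ theta
        # In 1D, W_2^2 has a closed form: match sorted samples.
        total += np.mean((np.sort(px) - np.sort(py)) ** 2)
    return np.sqrt(total / n_projections)

# Example: two 64-dimensional Gaussians whose means differ by 0.5 per axis.
rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=(1000, 64))
y = rng.normal(0.5, 1.0, size=(1000, 64))
print(sliced_wasserstein(x, y))  # roughly 0.5, the average projected mean shift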


Cited by 87 publications (77 citation statements)
References 16 publications
“…3.1 the sample complexity of the Wasserstein and sliced Wasserstein distances. We show that for a certain class of distributions the Wasserstein distance has an exponential sample complexity, while the sliced Wasserstein distance [8,34] has a polynomial sample complexity.…”
Section: Introduction (mentioning, confidence: 99%)
“…We first apply this intuition to analyze the recently proposed sliced Wasserstein distance GAN, which is based on the average Wasserstein distance of the projected versions of two distributions along a few randomly picked directions [8,20,34]. We prove that the sliced Wasserstein distance is generalizable for Gaussian distributions (i.e., it has polynomial sample complexity), while the Wasserstein distance is not, thus partially explaining why [8,20,34] may exhibit better behavior than the Wasserstein distance [2].…”
Section: Introduction (mentioning, confidence: 99%)
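The generalizability claim above can be illustrated empirically. The following is a hypothetical experiment, not from the cited papers, with illustrative helper names `wasserstein2` and `sliced_wasserstein2`: draw two independent sample sets from the same high-dimensional Gaussian, so the true distance is zero, and observe that the empirical Wasserstein estimate stays large while the sliced estimate shrinks as the sample size grows.

```python
# Hypothetical experiment (not from the cited papers): both sample sets come
# from the SAME standard Gaussian, so the true distance is zero. The empirical
# W_2 estimate stays large in high dimension, while the sliced estimate decays.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein2(x, y):
    """Exact W_2 between equal-size point clouds via optimal assignment."""
    cost = cdist(x, y, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

def sliced_wasserstein2(x, y, n_projections=200, seed=0):
    """Vectorized Monte Carlo SW_2 over random unit directions."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_projections, x.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    px = np.sort(x @ thetas.T, axis=0)  # sorted 1D projections, per direction
    py = np.sort(y @ thetas.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))

rng = np.random.default_rng(1)
d = 128  # high-dimensional setting
for n in (50, 200, 800):
    x, y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    print(f"n={n:4d}  W2={wasserstein2(x, y):.3f}  SW2={sliced_wasserstein2(x, y):.3f}")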
“…Self-training has been exploited in various tasks such as semi-supervised learning [25,21], domain adaptation [38,58], and noisy label learning [40,35]. [41,44,34,47,24,13] adopted adversarial training at the feature level to learn domain-invariant features and reduce the discrepancy across domains. [18,8,27] applied adversarial training at the image level to make features invariant to illumination, color, and other style factors.…”
Section: Related Work (mentioning, confidence: 99%)
“…[47] applies an adversarial loss directly on learned segmentation feature maps. Other examples are [34], [50], [4], [54], and [5].…”
Section: Related Work (mentioning, confidence: 99%)