2020
DOI: 10.1137/19m1301047

A Wasserstein-Type Distance in the Space of Gaussian Mixture Models

Abstract: In this paper we introduce a Wasserstein-type distance on the set of Gaussian mixture models. This distance is defined by restricting the set of possible coupling measures in the optimal transport problem to Gaussian mixture models. We derive a very simple discrete formulation for this distance, which makes it suitable for high dimensional problems. We also study the corresponding multimarginal and barycenter formulations. We show some properties of this Wasserstein-type distance, and we illustrate its practic…
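The "very simple discrete formulation" mentioned in the abstract reduces the distance between two GMMs to a small discrete optimal transport problem: the cost matrix holds pairwise squared W2 distances between the Gaussian components, and the coupling is a joint weight matrix over component pairs. A minimal one-dimensional, two-component sketch of this idea (all function names are ours, not the authors'; in the 2x2 case the transport plan has a single free parameter, and the linear cost is minimized at an endpoint of its feasible interval):

```python
def w2sq_gauss_1d(m0, s0, m1, s1):
    # Closed-form squared W2 between 1-D Gaussians N(m0, s0^2) and N(m1, s1^2)
    return (m0 - m1) ** 2 + (s0 - s1) ** 2

def mw2sq_two_component(p, comps0, q, comps1):
    """Squared GMM distance between two 2-component 1-D mixtures.

    comps0, comps1: lists of two (mean, std) pairs; p, q: weight of the
    first component of each mixture. The 2x2 coupling is parameterized by
    t = mass sent from component 0 to component 0; the cost is linear in t,
    so the minimum sits at an endpoint of t's feasible interval.
    """
    c = [[w2sq_gauss_1d(m0, s0, m1, s1) for (m1, s1) in comps1]
         for (m0, s0) in comps0]

    def cost(t):
        return (t * c[0][0] + (p - t) * c[0][1]
                + (q - t) * c[1][0] + (1 - p - q + t) * c[1][1])

    lo, hi = max(0.0, p + q - 1.0), min(p, q)
    return min(cost(lo), cost(hi))
```

For example, shifting both components of a mixture by 2 yields a squared distance of 4, matching the per-component closed form; general mixtures would replace the endpoint argument with a discrete OT solver over the component weights.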

Cited by 58 publications (52 citation statements). References 30 publications.
“…Next, we study the practical convergence of projected stochastic gradient descent (Algorithm 3). Using the fact that the true Wasserstein barycenter of one-dimensional Gaussian measures has closed-form expression for the mean and the variance [22], we study the convergence to the true barycenter of the generated truncated Gaussian measures. Figure 1 illustrates the convergence in the 2-Wasserstein distance within 40 s.…”
Section: The Total Complexity of Algorithm 3 Is
Citation type: mentioning (confidence: 99%)
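The closed form invoked in this excerpt is standard: the 2-Wasserstein barycenter of one-dimensional Gaussians N(m_i, σ_i²) with weights λ_i is again Gaussian, with mean Σ λ_i m_i and standard deviation Σ λ_i σ_i. A minimal sketch (the function name is ours):

```python
def gaussian_barycenter_1d(means, stds, weights):
    """W2 barycenter of 1-D Gaussians N(m_i, s_i^2) with weights l_i.

    The barycenter is Gaussian with mean sum(l_i * m_i) and standard
    deviation sum(l_i * s_i); returns the (mean, std) pair.
    """
    m = sum(l * m_i for l, m_i in zip(weights, means))
    s = sum(l * s_i for l, s_i in zip(weights, stds))
    return m, s
```

For instance, the equal-weight barycenter of N(0, 1) and N(2, 9) is N(1, 4).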
“…So, it is worth exploring noise distributions other than the exponential mechanism. When P X|S (•|s, ρ) is a Gaussian distribution or a Gaussian mixture model for all s, the Kantorovich optimal transport plan π * is fully characterized by the means and covariance matrices (Takatsu, 2010; Delon and Desolneux, 2020). Since Gaussian models are widely used in machine learning, it is of interest whether the Kantorovich mechanism can be applied to privacy-preserving pattern recognition problems.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
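The characterization cited in this excerpt is the well-known Gaussian closed form: between N(m0, S0) and N(m1, S1), the squared 2-Wasserstein distance (and the optimal plan) depends only on the means and covariances, via W2² = |m0 - m1|² + tr(S0 + S1 - 2(S0^{1/2} S1 S0^{1/2})^{1/2}). A sketch, assuming NumPy is available (helper names are ours):

```python
import numpy as np

def sqrtm_psd(a):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def w2sq_gaussian(m0, s0, m1, s1):
    """Closed-form squared 2-Wasserstein distance between N(m0, S0) and N(m1, S1)."""
    r0 = sqrtm_psd(s0)
    cross = sqrtm_psd(r0 @ s1 @ r0)
    return float(np.sum((m0 - m1) ** 2) + np.trace(s0 + s1 - 2.0 * cross))
```

With identity covariances the trace term vanishes and the formula collapses to the squared Euclidean distance between the means.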
“…We also provide further justification for why the Wasserstein distance is a sensible metric to use. It is well known that a mixture of Gaussians can converge in distribution to any continuous random variable; however, existing work has shown that a mixture of Gaussians can also approximate any discrete distribution arbitrarily well in the Wasserstein distance [20].…”
Section: Wasserstein Distance
Citation type: mentioning (confidence: 99%)
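The approximation claim in this excerpt admits a simple quantitative coupling argument: centre one Gaussian component (weight p_i, width σ_i) on each atom x_i of the discrete distribution and transport each component to its own atom; the cost is Σ p_i σ_i², so W2 ≤ (Σ p_i σ_i²)^{1/2}, and shrinking the widths drives the distance to zero. A sketch of that bound (the function name is ours, not from [20]):

```python
import math

def w2_upper_bound(probs, sigmas):
    """Coupling upper bound on W2 between the mixture sum_i p_i N(x_i, s_i^2)
    and the discrete distribution sum_i p_i delta_{x_i}: sqrt(sum_i p_i s_i^2).
    Note the bound does not depend on the atom locations x_i."""
    return math.sqrt(sum(p * s * s for p, s in zip(probs, sigmas)))

print(w2_upper_bound([0.3, 0.7], [0.1, 0.1]))  # ≈ 0.1: halving every sigma halves the bound
```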