2020
DOI: 10.48550/arxiv.2008.09165
Preprint

Linear Optimal Transport Embedding: Provable Wasserstein classification for certain rigid transformations and perturbations

Cited by 7 publications (9 citation statements). References 0 publications.
“…, see [20]. Only for $X = \mathbb{R}$, we have that $(\mathcal{P}_2(\mathbb{R}), W_2)$ is flat, meaning that $\mathrm{LOT}_\sigma$ and the Wasserstein distance coincide, i.e.…”
Section: Linear Optimal Transport
confidence: 97%
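On the real line this flatness is explicit. As a sketch using standard optimal transport facts (not a quotation from [20]): for an absolutely continuous reference $\sigma$ on $\mathbb{R}$, the optimal map from $\sigma$ to $\mu$ is $T_\mu = F_\mu^{-1} \circ F_\sigma$, and the substitution $u = F_\sigma(x)$ gives

\[
\mathrm{LOT}_\sigma(\mu,\nu)
  = \| T_\mu - T_\nu \|_{L^2(\sigma)}
  = \Big( \int_0^1 \big| F_\mu^{-1}(u) - F_\nu^{-1}(u) \big|^2 \, du \Big)^{1/2}
  = W_2(\mu,\nu).
\]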
“…Conditions under which LOT transforms signals created by algebraic generative models into convex sets have been addressed in [1]. In [20], the authors characterize settings in which LOT embeds families of distributions into a space where they are linearly separable, and they provide conditions under which the LOT distance between two measures is nearly isometric to their 2-Wasserstein distance. Note that a linear version of the Hellinger–Kantorovich distance is also available [7].…”
Section: Introduction
confidence: 99%
“…They are particularly effective for classifying data emanating from physical processes where signal classes can be modeled as observations of a particular set of template signals under some unknown, possibly random, temporal deformation or transportation (Park et al, 2018; Shifat-E-Rabbi et al, 2021; Rubaiyat et al, 2022b). Efforts have been made to explain the success of these models by understanding the geometry of the transform embedding space (Park et al, 2018; Aldroubi et al, 2021; Moosmüller & Cloninger, 2020), where embedding properties, and conditions under which a data class becomes convex and linearly separable in the transform space, are studied. In a nutshell, the template-deformation-based generative models capture the nonlinear structure of signals, and the nonlinear transport transforms render signal classes that are nonlinear and non-convex into convex sets in the transform embedding space (see Figure 1).…”
Section: Introduction
confidence: 99%
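A one-dimensional instance of the convexification described above, stated as a sketch under standard LOT assumptions rather than as a result quoted from these papers: if a signal class consists of translates $\mu_\tau = f(\cdot - \tau)$ of a single template, then on $\mathbb{R}$ the optimal maps from a fixed reference $\sigma$ satisfy $T_{\mu_\tau} = T_{\mu_0} + \tau$, so the embedded class

\[
\{\, T_{\mu_0} + \tau : \tau \in \mathbb{R} \,\} \subset L^2(\sigma)
\]

is an affine line, hence convex; two such classes with different templates then become candidates for linear separation in the embedding space.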
“…The motivation for LGW is based on linear optimal transport (LOT), which was introduced by Wang et al [8]. Since its introduction, LOT has been successfully applied to several tasks in nuclear structure-based pathology [9], parametric signal estimation [10], signal and image classification [11,12,13,14], modeling of turbulence [15], cancer detection [16,17,18], Alzheimer's disease detection [19], and vehicle-type recognition [20], as well as to demultiplexing vortex modes in optical communications [21]. Both LOT and LGW make use of the geometric structure of the (Gromov-)Wasserstein space and compute distances in the tangent space with respect to some a priori fixed reference.…”
Section: Introduction
confidence: 99%
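The tangent-space computation mentioned in this statement can be sketched numerically for discrete measures. A minimal example, assuming the POT library (`ot`) is available; the helper names `lot_embedding` and `lot_distance` and the Gaussian test data are illustrative choices, not taken from [8]:

import numpy as np
import ot  # Python Optimal Transport (POT); assumed installed

def lot_embedding(x_ref, w_ref, x_tgt, w_tgt):
    # Pairwise squared-Euclidean costs between reference and target supports.
    M = ot.dist(x_ref, x_tgt)
    # Exact optimal transport plan between the two discrete measures.
    plan = ot.emd(w_ref, w_tgt, M)
    # Barycentric projection of the plan: one tangent vector per reference
    # point, i.e. an approximate Monge map evaluated on the support of sigma.
    return (plan / w_ref[:, None]) @ x_tgt

def lot_distance(w_ref, T_mu, T_nu):
    # L2(sigma) norm of the difference of the two tangent-space embeddings.
    return np.sqrt(np.sum(w_ref[:, None] * (T_mu - T_nu) ** 2))

rng = np.random.default_rng(0)
n = 200
x_ref = rng.normal(size=(n, 2))   # reference measure sigma (fixed a priori)
w = np.full(n, 1.0 / n)           # uniform weights throughout
x_mu = rng.normal(loc=(2.0, 0.0), size=(n, 2))
x_nu = rng.normal(loc=(-1.0, 1.0), size=(n, 2))

T_mu = lot_embedding(x_ref, w, x_mu, w)
T_nu = lot_embedding(x_ref, w, x_nu, w)
print("LOT distance:", lot_distance(w, T_mu, T_nu))

Once the embeddings T_mu and T_nu are computed, any number of measures can be compared by plain Euclidean geometry in the tangent space, which is what makes the a priori fixed reference attractive for classification.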