2020
DOI: 10.3390/a13090212

Fused Gromov-Wasserstein Distance for Structured Objects

Abstract: Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discarding its features. In this pa…
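
For readers who want to experiment, here is a minimal sketch (not taken from the paper) of computing the fused Gromov-Wasserstein distance between two small attributed point clouds with the POT library (`pip install pot`). The random data, the feature dimension, and the trade-off value `alpha=0.5` are illustrative assumptions.

```python
# Minimal sketch: fused Gromov-Wasserstein between two attributed
# point clouds using POT. All data below are illustrative assumptions.
import numpy as np
import ot

rng = np.random.default_rng(0)

# Two "structured objects": points (structure) with feature vectors.
xs, feat_s = rng.normal(size=(5, 2)), rng.normal(size=(5, 3))
xt, feat_t = rng.normal(size=(6, 2)), rng.normal(size=(6, 3))

# Structure matrices: pairwise distances *within* each object.
C1 = ot.dist(xs, xs)
C2 = ot.dist(xt, xt)

# Feature cost matrix: pairwise distances *across* the two objects.
M = ot.dist(feat_s, feat_t)

# Uniform weights over the elements of each object.
p, q = ot.unif(5), ot.unif(6)

# alpha interpolates between pure Wasserstein (alpha=0, features only)
# and pure Gromov-Wasserstein (alpha=1, structure only).
fgw_dist = ot.gromov.fused_gromov_wasserstein2(
    M, C1, C2, p, q, loss_fun="square_loss", alpha=0.5
)
print(fgw_dist)
```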

Cited by 56 publications (57 citation statements, all classified as mentioning)
References 41 publications

Selected citation statements:
“…The high computational complexity limits the applications of GW discrepancy. In recent years, many variants of GW discrepancy have been proposed, e.g., the recursive GW discrepancy and the sliced GW discrepancy (Vayer et al. 2019b). Although these works have achieved encouraging results in many tasks, none of them consider building Gromov-Wasserstein factorization models as we do.…”
Section: GW Discrepancy and Its Applications
Citation type: mentioning (confidence: 99%)
“…FG-W, unlike the G-W, combines both feature and structural information and shows its advantage in graph classification [23, 44]. Consider two sets of tuples $\{(x_i, a_i)\}_{i=1}^{n}$ in space $\mathcal{X} \times \Omega$ and $\{(y_j, b_j)\}_{j=1}^{m}$ in space $\mathcal{Y} \times \Omega$; here, $x_i$ and $y_j$ are the data points, and $a_i$ and $b_j$ are their corresponding features, which are both in the space $\Omega$ and share the same dimension. With a slight abuse of notation, we will use the same symbol as Eq. (1) to denote their empirical distribution…”
Section: Domain Adaptation for fNIRS
Citation type: mentioning (confidence: 99%)
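
For context, the fused Gromov-Wasserstein distance these excerpts build on is defined in the indexed paper, in the discrete case, roughly as follows; the notation mirrors the excerpt above, with $h$ and $g$ the histogram weights of the two objects and $\Pi(h,g)$ the set of couplings (this is the standard formulation, not the citing paper's Eq. (1)):

```latex
% Empirical distributions over the two structured objects:
%   \mu = \sum_{i=1}^{n} h_i \, \delta_{(x_i, a_i)}, \qquad
%   \nu = \sum_{j=1}^{m} g_j \, \delta_{(y_j, b_j)}.
\mathrm{FGW}_{q,\alpha}(\mu,\nu)
  = \min_{\pi \in \Pi(h,g)} \sum_{i,j,k,l}
    \Big( (1-\alpha)\, d(a_i, b_j)^{q}
        + \alpha\, \big| C_1(i,k) - C_2(j,l) \big|^{q} \Big)\,
    \pi_{i,j}\, \pi_{k,l}
```

Here $C_1$ and $C_2$ are the intra-object structure matrices, $d$ is the metric on the feature space $\Omega$, and $\alpha \in [0,1]$ trades feature cost against structural cost.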
“…Similar to the drug networks, some of the protein networks are highly dense. Implementation Details and Competing Algorithms: To measure the Gromov-Wasserstein Discrepancy (GWD) among layers of multiplex networks, we build our Python code on the implementation provided by [24]. We use this tool with σ = 100 to compute the w_ij's for all multiplex networks.…”
Section: Drug-Target Interaction Prediction
Citation type: mentioning (confidence: 99%)
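
The excerpt above builds on the POT-based implementation accompanying the indexed paper. A minimal sketch of measuring the GW discrepancy between two layers of a multiplex network might look as follows; the random structure matrices and the Gaussian-kernel conversion of the discrepancy into a weight w_ij with bandwidth σ are illustrative assumptions, not the citing paper's exact construction.

```python
# Minimal sketch: GW discrepancy between two layers of a multiplex
# network with POT, then an *assumed* Gaussian-kernel weighting.
import numpy as np
import ot

rng = np.random.default_rng(0)

def layer_structure(n_nodes: int) -> np.ndarray:
    """Random symmetric 'adjacency' matrix standing in for one layer."""
    A = rng.random((n_nodes, n_nodes))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

C1 = layer_structure(8)   # layer i
C2 = layer_structure(10)  # layer j (layers may differ in size)

# Uniform node weights within each layer.
p, q = ot.unif(C1.shape[0]), ot.unif(C2.shape[0])

# GW discrepancy between the two intra-layer structure matrices.
gwd = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun="square_loss")

# Assumed conversion to a layer-similarity weight w_ij (illustrative;
# the cited paper only states that sigma = 100 is used).
sigma = 100.0
w_ij = np.exp(-gwd / sigma)
print(gwd, w_ij)
```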