2019
DOI: 10.48550/arxiv.1901.06003
Preprint

Gromov-Wasserstein Learning for Graph Matching and Node Embedding

Hongteng Xu,
Dixin Luo,
Hongyuan Zha
et al.

Abstract: A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes. Using Gromov-Wasserstein discrepancy, we measure the dissimilarity between two graphs and find their correspondence, according to the learned optimal transport. The node embeddings associated with the two graphs are learned under the guidance of the optimal transport, the distance of which not only reflects the topological structure of each graph but also yields …

Cited by 8 publications (22 citation statements)
References 36 publications
“…Although it is an unsupervised method, it has achieved successes in multi-modal learning [9], natural language processing [52], and 3D shape correspondence [17,30]. Among graph matching approaches, it is common to perform alignment by the Wasserstein distance (WD) [37], the Gromov-Wasserstein distance (GWD) [38], or both [9,51]. Recent graph-matching studies combine WD and GWD and learn a shared correspondence between them to improve effectiveness [9,51].…”
Section: Related Work (mentioning, confidence: 99%)
“…Among graph matching approaches, it is common to perform alignment by the Wasserstein distance (WD) [37], the Gromov-Wasserstein distance (GWD) [38], or both [9,51]. Recent graph-matching studies combine WD and GWD and learn a shared correspondence between them to improve effectiveness [9,51]. However, these methods have a relatively high computational complexity of O(n^3), and they are not scalable to large graphs.…”
Section: Related Work (mentioning, confidence: 99%)
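The O(n^3) complexity quoted above comes from evaluating the Gromov-Wasserstein objective with matrix products rather than the naive four-index sum. For the common squared loss, the objective for a fixed transport plan T admits the well-known decomposition into marginal-only terms plus a cross term <C1, T C2 Tᵀ>. A minimal NumPy sketch (illustrative only; the function and variable names are my own, not taken from the cited paper):

```python
import numpy as np

def gw_discrepancy(C1, C2, T):
    """Squared-loss Gromov-Wasserstein objective
        sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]
    evaluated with matrix products (O(n^3)) instead of the naive O(n^4) sum."""
    p = T.sum(axis=1)            # row marginal of the coupling
    q = T.sum(axis=0)            # column marginal of the coupling
    const = (C1**2 @ p) @ p + (C2**2 @ q) @ q
    cross = np.sum(C1 * (T @ C2 @ T.T))   # <C1, T C2 T^T>
    return const - 2.0 * cross

# Two identical 2-node graphs: the identity coupling gives zero discrepancy,
# while an uninformative coupling pays a positive cost.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
T_id = np.diag([0.5, 0.5])       # perfect node correspondence
T_unif = np.full((2, 2), 0.25)   # independent (uninformative) coupling
print(gw_discrepancy(C, C, T_id))    # → 0.0
print(gw_discrepancy(C, C, T_unif))  # → 0.5
```

The decomposition is exact for the squared loss; other losses require the more general tensor-product formulation.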
“…A(T) are calculated based on the updated base intensities and infectivity matrices. Inspired by the work in [6,10], we apply a proximal gradient method to solve (6) iteratively. Given the current optimal transport T^(n), we add a proximal term as the regularizer of (6):…”
Section: Updating Hawkes Processes (mentioning, confidence: 99%)
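The snippet above regularizes each optimal-transport update with a proximal term around the current plan T^(n). A generic proximal-point step of this kind (a sketch in the spirit of the referenced scheme; `gamma`, the iteration count, and all names here are illustrative assumptions, not the cited paper's code) reduces to Sinkhorn scaling with the kernel T^(n) · exp(−G/γ), where G is the gradient of the objective at the current plan:

```python
import numpy as np

def proximal_ot_step(G, T_prev, p, q, gamma=1.0, n_iter=200):
    """One proximal-point update for
        min_T <G, T> + gamma * KL(T || T_prev)
    subject to T 1 = p and T^T 1 = q, solved by Sinkhorn scaling."""
    K = T_prev * np.exp(-G / gamma)   # proximal kernel
    a = np.ones_like(p)
    for _ in range(n_iter):
        b = q / (K.T @ a)             # enforce column marginal
        a = p / (K @ b)               # enforce row marginal
    return a[:, None] * K * b[None, :]

rng = np.random.default_rng(0)
n = 4
p = np.full(n, 1.0 / n)
q = np.full(n, 1.0 / n)
G = rng.random((n, n))               # surrogate gradient of the objective
T0 = np.outer(p, q)                  # start from the independent coupling
T1 = proximal_ot_step(G, T0, p, q)
print(T1.sum(axis=1))                # ≈ p after scaling
```

Iterating this step (recomputing G at each new plan) drives the sequence toward an unregularized optimal transport, which is the appeal of the proximal-point formulation over a fixed entropic smoothing.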
“…In particular, improvements of the computational strategies to efficiently obtain Wasserstein distances [1,8] have led to many applications in machine learning that use them for various purposes, ranging from generative models [2] to new loss functions [14]. For applications to graphs, notions from optimal transport were used to tackle the graph alignment problem [44]. In this work, we provide the theoretical foundations of our method, then define a new graph kernel formulation, and finally present successful experimental results.…”
Section: Introduction (mentioning, confidence: 99%)
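The efficient Wasserstein computations this citing work relies on have a particularly simple special case worth keeping in mind: in one dimension, with two equal-size empirical samples, the 1-Wasserstein distance is obtained in closed form by sorting both samples and averaging the pointwise absolute differences. A self-contained sketch of this standard fact (the function name is my own):

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size empirical samples:
    sort both samples and average the absolute differences
    (the one-dimensional closed form of optimal transport)."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein_1d([0.0, 1.0], [1.0, 2.0]))  # → 1.0
```

In higher dimensions no such sorting shortcut exists, which is why the approximate solvers cited in [1,8] matter.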

Wasserstein Weisfeiler-Lehman Graph Kernels
Togninalli, Ghisu, Llinares-López et al., 2019 (Preprint)