2018
DOI: 10.48550/arxiv.1802.09612
Preprint

MILE: A Multi-Level Framework for Scalable Graph Embedding

Abstract: Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework, a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbo…

Cited by 19 publications (36 citation statements)
References 25 publications
“…Graph reduction with spectral approximation guarantees is studied in [18,23,27]. Recently, graph coarsening has been applied to speed up graph embedding algorithms [12,16,24]. As far as we are aware, this is the first work applying graph coarsening to speed up the training of GNNs in the semi-supervised setting.…”
Section: Related Work
confidence: 99%
“…REFINE also outperforms the initial RBQR (Algorithm 1) by a large margin. We also compare with fast network embedding methods that avoid matrix factorization [1,13,3]. As shown in Table 4, our method outperforms the other methods significantly.…”
Section: Performance
confidence: 99%
“…Afterward, network embedding methods are applied to learn representations of the supernodes; these learned representations then serve as initial values for each supernode's constituent nodes when the embedding methods are run again over the finer-grained subgraphs. Compared with HARP, MILE [34] implements embedding refinement to learn better representations for nodes in finer-grained networks with lower computational cost and higher flexibility. While HARP and MILE still follow the embedding-lookup setting of previous work, our framework manages to reduce memory usage as well as improve scalability.…”
Section: Network Representation Learning
confidence: 99%
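The coarsen-embed-refine flow described in the excerpt above can be sketched in a few lines. This is a toy illustration under stated assumptions, not MILE's actual implementation: the greedy edge matching, the random base embedder, and the copy-down refinement are all simplified stand-ins for the paper's hybrid matching, base embedding method, and refinement model.

```python
import numpy as np

def coarsen(adj):
    """One level of coarsening: greedily match each node with an
    unmatched neighbor and merge the pair into a supernode.
    `adj` is a dict: node -> set of neighbor nodes."""
    matched, mapping = set(), {}
    next_id = 0
    for u in sorted(adj):
        if u in matched:
            continue
        partner = next((v for v in sorted(adj[u])
                        if v not in matched and v != u), None)
        mapping[u] = next_id
        matched.add(u)
        if partner is not None:
            mapping[partner] = next_id
            matched.add(partner)
        next_id += 1
    # Build the coarse adjacency over supernodes, dropping self-loops.
    coarse = {i: set() for i in range(next_id)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if mapping[u] != mapping[v]:
                coarse[mapping[u]].add(mapping[v])
    return coarse, mapping

def base_embed(adj, dim=2, seed=0):
    """Stand-in for any base embedding method (here: random vectors)."""
    rng = np.random.default_rng(seed)
    return {u: rng.standard_normal(dim) for u in adj}

def refine(embedding, mapping):
    """Project supernode embeddings back to the finer graph: each fine
    node starts from its supernode's vector (MILE then refines these)."""
    return {u: embedding[s].copy() for u, s in mapping.items()}

# Toy 6-node cycle graph: coarsen once, embed the coarse graph,
# then initialize the fine nodes from their supernodes.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
coarse, mapping = coarsen(adj)
emb_coarse = base_embed(coarse)
emb_fine = refine(emb_coarse, mapping)
```

In a full multi-level pipeline this coarsen step would be applied repeatedly, and refinement would walk back up the hierarchy level by level.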
“…HARP [33] and MILE [34] have used Graph Coarsening to find a smaller network that approximates the global structure of its input and to learn coarse embeddings from the small network, which serve as good initializations for learning representations in the input network. Graph Coarsening coarsens a network without counting the number of original nodes that belong to each coarsened group.…”
Section: Graph Partitioning
confidence: 99%
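The point about lost node counts can be made concrete with a small sketch. The fine-to-coarse `mapping` below is a hypothetical output of one coarsening pass (not taken from either paper); counting group sizes is one simple way to retain the information that plain coarsening discards.

```python
from collections import Counter

def supernode_sizes(mapping):
    """Count how many original (fine) nodes each coarse supernode
    absorbs; plain coarsening, as described above, drops this count."""
    return Counter(mapping.values())

# Hypothetical fine node -> supernode id mapping from one coarsening
# pass. Note the groups are uneven: supernode 0 absorbed three nodes.
mapping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2, 5: 2}
sizes = supernode_sizes(mapping)
```

Keeping these sizes as node weights on the coarse graph is what lets size-aware schemes penalize merging already-large supernodes.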