2017
DOI: 10.48550/arxiv.1706.07845
Preprint

HARP: Hierarchical Representation Learning for Networks

Abstract: We present HARP, a novel method for learning low-dimensional embeddings of a graph's nodes which preserves higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations…
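The abstract describes a coarsen-embed-refine loop: compress the graph, embed the compressed graph, then use those embeddings to initialize training on the finer graph. The sketch below is a minimal Python illustration of that loop, assuming greedy edge collapsing for the compression step and a generic `embed(graph, dim, init)` callback standing in for any base method such as DeepWalk or node2vec; these names and the matching heuristic are illustrative, not the authors' implementation.

```python
# Minimal sketch of HARP's coarsen-embed-refine loop, assuming greedy
# edge collapsing; `embed` is a hypothetical stand-in for any base
# embedding method, not the authors' implementation.
import networkx as nx

def coarsen(graph):
    """Collapse a greedy matching of adjacent nodes into supernodes.

    Returns the coarser graph and a dict mapping every original node
    to its supernode. (HARP itself combines edge collapsing with star
    collapsing; this sketch uses edge collapsing only.)
    """
    merge_map, used = {}, set()
    for u, v in graph.edges():
        if u not in used and v not in used:
            merge_map[u] = merge_map[v] = u
            used.update((u, v))
    for n in graph.nodes():          # unmatched nodes map to themselves
        merge_map.setdefault(n, n)
    coarse = nx.Graph()
    coarse.add_nodes_from(set(merge_map.values()))
    for u, v in graph.edges():
        if merge_map[u] != merge_map[v]:
            coarse.add_edge(merge_map[u], merge_map[v])
    return coarse, merge_map

def harp(graph, embed, dim=128, levels=3):
    """Embed `graph` by recursively coarsening it first.

    `embed(graph, dim, init)` must accept a dict of per-node initial
    vectors (or None) and return a dict of learned vectors.
    """
    if levels == 0 or graph.number_of_edges() < 1:
        return embed(graph, dim, init=None)
    coarse, merge_map = coarsen(graph)
    coarse_emb = harp(coarse, embed, dim, levels - 1)
    # Prolongation: every node starts from its supernode's vector.
    init = {n: coarse_emb[merge_map[n]] for n in graph.nodes()}
    return embed(graph, dim, init=init)
```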

Cited by 44 publications (26 citation statements)
References 5 publications (6 reference statements)
“…In this paper, our hierarchy is formed by multiple k-NN graphs recurrently built with clustering and node aggregation, which are learnt from the meta-training set. Hierarchical representation has also been explored in the graph representation learning literature [63,9,4,19,18,26]. There, the focus is to learn a stronger feature representation to classify graphs [63] or input nodes [18] into a closed set of class labels.…”
Section: Related Work and Contributions
confidence: 99%
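To make that construction concrete: a recurrently built k-NN hierarchy alternates between building a neighborhood graph over the current node features, clustering the nodes, and aggregating each cluster into a single coarser node. The sketch below shows one way to realize that loop; KMeans clustering and mean-pooled features are assumptions for illustration, not the cited paper's exact procedure.

```python
# Hedged sketch of a recurrently built k-NN hierarchy: graph -> cluster
# -> aggregate -> coarser graph. KMeans and mean pooling are assumptions
# for illustration, not the cited paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def knn_hierarchy(features, k=5, n_levels=3, shrink=0.5):
    """Return (adjacency, features) pairs from finest to coarsest level."""
    levels, x = [], features
    for _ in range(n_levels):
        adj = kneighbors_graph(x, n_neighbors=min(k, len(x) - 1))
        levels.append((adj, x))
        n_clusters = max(2, int(len(x) * shrink))
        if n_clusters >= len(x):
            break
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(x)
        # Node aggregation: each cluster becomes one coarser node whose
        # feature vector is the mean of its members.
        x = np.stack([x[labels == c].mean(axis=0)
                      for c in range(n_clusters)])
    return levels
```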
“…There are a number of other studies that consider the context of links and nodes (Tu, Liu, Liu, & Sun, 2017) or node attributes (Huang, Li, & Hu, 2017), or that focus on developing a meta-strategy (Chen, Perozzi, Hu, & Skiena, 2017). On the other hand, Tu, Zhang, Liu & Sun (2016) and Wang, Cui, & Zhu (2016) develop semi-supervised representation learning algorithms.…”
Section: Related Work
confidence: 99%
“…Closely related to our model, HARP [33] first coarsens the graph so that the new graph consists of supernodes. Network embedding methods are then applied to learn representations of the supernodes, and with the learned representations as initial values for the supernodes' constituent nodes, the embedding methods are run over the finer-grained subgraphs again.…”
Section: Network Representation Learning
confidence: 99%
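The key step in that description is the prolongation: copying each supernode's learned vector to all of its constituent nodes as the starting point for the finer-grained run. Below is a minimal sketch of that step in isolation, where `merge_map` maps each fine node to its supernode as in the sketch under the abstract; the small noise term is an assumption added here to break ties between nodes sharing a supernode, not something the quoted description specifies.

```python
# Prolongation in isolation: copy each supernode's learned vector to its
# constituent nodes as the initialization for the next, finer-grained
# embedding run. The noise term is an illustrative assumption that lets
# nodes sharing a supernode separate during fine-level training.
import numpy as np

def prolong(coarse_emb, merge_map, noise=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    return {
        node: coarse_emb[rep] + noise * rng.standard_normal(len(coarse_emb[rep]))
        for node, rep in merge_map.items()
    }
```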
“…HARP [33] and MILE [34] have used Graph Coarsening to find a smaller network which approximates the global structure of its input, and learn coarse embeddings from the small network, which serve as good initializations for learning representations in the input network. Graph Coarsening coarsens a network without counting the number of original nodes that belong to a coarsened group.…”
Section: Graph Partitioning
confidence: 99%
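The last sentence points at a limitation: plain coarsening keeps no record of how many original nodes each supernode absorbs, so all supernodes look alike to the next level. Below is a hedged sketch of the size-aware variant the sentence implicitly contrasts with, carrying a per-supernode count through the collapse; the `size` attribute name and the greedy matching are illustrative assumptions.

```python
# Sketch of edge-collapse coarsening that, unlike the plain variant the
# statement describes, records how many original nodes fall into each
# coarsened group. The `size` attribute is an illustrative assumption.
import networkx as nx

def coarsen_with_sizes(graph):
    sizes = {n: graph.nodes[n].get("size", 1) for n in graph.nodes()}
    merge_map, used = {}, set()
    for u, v in graph.edges():                 # greedy edge matching
        if u not in used and v not in used:
            merge_map[u] = merge_map[v] = u
            used.update((u, v))
    for n in graph.nodes():
        merge_map.setdefault(n, n)
    coarse = nx.Graph()
    for n, rep in merge_map.items():
        if rep not in coarse:
            coarse.add_node(rep, size=0)
        coarse.nodes[rep]["size"] += sizes[n]  # count absorbed nodes
    for u, v in graph.edges():
        if merge_map[u] != merge_map[v]:
            coarse.add_edge(merge_map[u], merge_map[v])
    return coarse
```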