Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/544

Fast Network Embedding Enhancement via High Order Proximity Approximation

Abstract: Many Network Representation Learning (NRL) methods have recently been proposed to learn vector representations for vertices in a network. In this paper, we summarize most existing NRL methods into a unified two-step framework comprising proximity matrix construction and dimension reduction. We focus on the analysis of the proximity matrix construction step and conclude that an NRL method can be improved by exploiting higher-order proximities when building the proximity matrix. We propose Network Embedding Update (NEU)…
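As a rough illustration of the enhancement idea the abstract describes, the following Python sketch propagates a base embedding through a row-normalized adjacency matrix to inject higher-order proximity. The update form and the weights lam1/lam2 are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
import numpy as np

def enhance_embedding(A_hat, X, lam1=0.5, lam2=0.25):
    """One enhancement step: X' = X + lam1 * A_hat @ X + lam2 * A_hat @ (A_hat @ X).

    A_hat: row-normalized adjacency matrix; X: base embedding from any NRL method.
    lam1/lam2 are illustrative weights, not values from the paper.
    """
    AX = A_hat @ X
    return X + lam1 * AX + lam2 * (A_hat @ AX)

# Toy usage: random graph, random base embedding.
n, d = 5, 4
A = np.random.rand(n, n)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalize
X = np.random.rand(n, d)
X_enhanced = enhance_embedding(A_hat, X)
```

The appeal of an update of this shape is that it only needs a few sparse matrix-vector products on top of an existing embedding, which is what makes the enhancement fast.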

Cited by 150 publications (98 citation statements); references 20 publications. Selected citation statements, ordered by relevance:
“…The representative works are DeepWalk [23] and node2vec [8], which use first-order and second-order random walks, respectively, to guide the network embedding. Subsequent analyses established the relationship between the random-walk-based methods and the matrix-factorization-based methods, showing that DeepWalk's embedding vectors learn from the truncated average commuting time between nodes [5,31]. This also implies that, to obtain a set of stable embedding vectors, one needs to run sufficiently many random-walk steps to estimate the stationary distributions of hitting time and commuting time.…”
Section: Related Work
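To make the role of walk length concrete, here is a minimal sketch of truncated first-order random walks over an adjacency-list graph. num_walks and walk_length are illustrative parameters; longer or more numerous walks give better estimates of the stationary quantities the excerpt mentions.

```python
import random

def random_walks(adj, num_walks=10, walk_length=40):
    """adj: dict mapping node -> list of neighbor nodes."""
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break                      # dead end: truncate the walk
                walk.append(random.choice(nbrs))
            walks.append(walk)
    return walks

# usage on a toy graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj)
```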
“…For example, DeepWalk [23] first samples random-walk paths from the network following the transition probabilities, and then runs the Skip-gram algorithm [19] on those paths to learn node embeddings that preserve the truncated average commuting time between nodes [5,31]. Node2vec [8] generalizes this idea by introducing a second-order random walk that balances the breadth and depth of the search when exploring node neighborhoods.…”
Section: Introduction
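A hedged sketch of that pipeline: treat random-walk node sequences as "sentences" and train Skip-gram on them, here via gensim's Word2Vec (sg=1 selects Skip-gram). The toy walks and hyperparameters below are placeholders.

```python
from gensim.models import Word2Vec

# Toy "corpus" of random-walk node sequences (node ids as strings).
walks = [["0", "1", "2", "3"], ["2", "1", "0", "2"], ["3", "2", "0", "1"]]

# sg=1 trains Skip-gram; vector_size/window/min_count are illustrative.
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=1)
vec = model.wv["0"]   # 64-dimensional embedding of node "0"
```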
“…For example, LP [22] and ICA [15] estimate labels of unlabeled nodes using local inference. Recently, graph embedding methods have been proposed to learn low-dimensional representations of nodes by leveraging relational properties such as random walks (e.g., Node2Vec [8]), high-order paths (e.g., GraRep [2] and NEU [19]), and structural similarities (e.g., Struc2Vec [13]).…”
Section: Related Work
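For intuition on the high-order-path idea, this sketch builds the 1..k-step transition matrices, factorizes each with SVD, and concatenates the per-order embeddings. It mirrors the flavor of GraRep-style methods but is not their exact objective; dim and k are illustrative.

```python
import numpy as np

def k_step_embeddings(A, k=3, dim=2):
    """Concatenate embeddings derived from the 1..k-step transition matrices."""
    P = A / A.sum(axis=1, keepdims=True)   # 1-step transition probabilities
    Pk = np.eye(A.shape[0])
    reps = []
    for _ in range(k):
        Pk = Pk @ P                        # raise to the next power of P
        U, S, _ = np.linalg.svd(Pk)
        reps.append(U[:, :dim] * np.sqrt(S[:dim]))
    return np.hstack(reps)                 # one block of columns per order

# usage on a toy 4-node graph (no isolated nodes)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Z = k_step_embeddings(A, k=3, dim=2)       # shape (4, 6)
```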
“…Owing to its ability to facilitate graph analysis, graph representation learning has drawn attention from the machine learning and data mining communities [5]-[7]. Most of these works focus on static networks, e.g., explicitly preserving local or high-order proximity [8]-[10], learning representations from truncated walks [11], [12], applying matrix factorization to obtain latent vectors [6], [13], [14], or incorporating heterogeneous information [5], [7]. Most of these methods lack theoretical support for producing reliable representations.…”
Section: Extraction Decoupling
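As a minimal example of the matrix-factorization route listed above, one can take a truncated SVD of a sparse proximity matrix to obtain latent node vectors. Using the raw adjacency matrix as the proximity matrix here is an illustrative simplification.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Sparse adjacency matrix of a small directed toy graph.
rows = [0, 1, 2, 2, 3]
cols = [1, 2, 0, 3, 2]
A = csr_matrix((np.ones(5), (rows, cols)), shape=(4, 4))

# Rank-2 truncated SVD; scale left singular vectors by sqrt of the
# singular values to get the latent node vectors.
U, S, Vt = svds(A, k=2)
Z = U * np.sqrt(S)   # shape (4, 2): one latent vector per node
```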