Proceedings of the 2017 ACM on Conference on Information and Knowledge Management 2017
DOI: 10.1145/3132847.3132959
Learning Edge Representations via Low-Rank Asymmetric Projections

Abstract: We propose a new method for embedding graphs while preserving directed edge information. Learning such continuous-space vector representations (or embeddings) of nodes in a graph is an important first step for using network information (from social networks, user-item graphs, knowledge bases, etc.) in many machine learning tasks. Unlike previous work, we (1) explicitly model an edge as a function of node embeddings, and we (2) propose a novel objective, the "graph likelihood", which contrasts information fro…
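The (truncated) abstract describes modeling a directed edge explicitly as a function of the two node embeddings. A minimal sketch of such an asymmetric, low-rank edge scorer follows; the dimensions, the factor matrices `L` and `R`, and the absence of the paper's learned nonlinearity are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 8  # node-embedding dimension, projection rank (r << d)

# Low-rank factors of an asymmetric projection M = L^T R.
# (Illustrative: the paper also transforms embeddings with a learned
# deep network before projecting; that step is omitted here.)
L = rng.normal(size=(r, d)) / np.sqrt(d)
R = rng.normal(size=(r, d)) / np.sqrt(d)

def edge_score(y_u, y_v):
    """Directed edge score sigma(y_u^T L^T R y_v).
    Since L^T R is not symmetric in general,
    edge_score(u, v) != edge_score(v, u)."""
    logit = (L @ y_u) @ (R @ y_v)
    return 1.0 / (1.0 + np.exp(-logit))

y_u = rng.normal(size=d)
y_v = rng.normal(size=d)
```

The low-rank factorization keeps the projection at 2·r·d parameters instead of d², while the asymmetry lets the model score the edge (u, v) differently from (v, u).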

Cited by 92 publications (138 citation statements)
References 16 publications
“…For each experiment, we report the AUC-ROC in a link prediction task performed using an ablation test described in Section 3.2. For consistency of comparison, we use the experimental settings (datasets and training/testing splits) of [2] for the baselines. Hence, the baselines’ numbers are the same as in [2] and are reported for completeness.…”
Section: Results
confidence: 99%
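The excerpt reports AUC-ROC on a link-prediction task. As a reference point for the metric, here is a minimal sketch (the function name is ours) computing AUC-ROC directly from its probabilistic definition: the chance that a held-out true edge outranks a sampled non-edge.

```python
def auc_roc(pos_scores, neg_scores):
    """AUC-ROC as the probability that a randomly chosen positive
    (true edge) scores higher than a randomly chosen negative
    (non-edge), with ties counted as 1/2.
    O(n*m) pairwise comparison; fine for a sketch."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, `auc_roc([2, 3], [1, 2])` compares four pairs (three wins, one tie) and returns 0.875; a perfect ranker returns 1.0.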
“…3 During inference with the baselines, we use the embeddings of a pair of nodes u and v to rank the likelihood that the link (u, v) is formed, employing a scoring function that takes as input the embeddings of the two nodes. For consistency with previous work, we used the methodology of [2], which we summarize here. Let Y_u and Y_v be, respectively, the embeddings of u and v. The edge scoring function is defined as follows: for EigenMaps, it is −||Y_u − Y_v||; for node2vec, we use the off-the-shelf LogisticRegression binary classifier from sklearn to learn a model over the Hadamard product of the two nodes' embeddings; for DNGR, we use the bottleneck-layer values as the embeddings and the dot product as similarity; for Asymmetric, we use the dot product; and for M-NMF, similarly to node2vec, we train a model on the Hadamard product of the embeddings.…”
Section: Methods
confidence: 99%
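The scoring functions named in this excerpt can be sketched as follows. This is a minimal illustration: the function names are ours, and the classifier training step for node2vec/M-NMF (fitting logistic regression on Hadamard-product features) is omitted.

```python
import numpy as np

def score_eigenmaps(y_u, y_v):
    # EigenMaps: negative Euclidean distance, so closer nodes
    # score higher. Symmetric in u and v.
    return -float(np.linalg.norm(y_u - y_v))

def score_dot(y_u, y_v):
    # DNGR (using bottleneck-layer vectors as embeddings) and
    # Asymmetric: plain dot-product similarity.
    return float(y_u @ y_v)

def hadamard_features(y_u, y_v):
    # node2vec / M-NMF: the elementwise (Hadamard) product is the
    # edge feature vector fed to a binary classifier such as
    # sklearn's LogisticRegression (training not shown here).
    return y_u * y_v
```

Note the design difference: the distance- and dot-product-based scorers need no training beyond the embeddings themselves, while the Hadamard-product approaches require fitting a supervised classifier on labeled edge/non-edge pairs.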