Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3411910
Towards Locality-Aware Meta-Learning of Tail Node Embeddings on Networks

Abstract: Network embedding is an active research area due to the prevalence of network-structured data. While the state of the art often learns high-quality embedding vectors for high-degree nodes with abundant structural connectivity, the quality of the embedding vectors for low-degree or tail nodes is often suboptimal due to their limited structural connectivity. While many real-world networks are long-tailed, to date little effort has been devoted to tail node embedding. In this paper, we formulate the goal of learn…
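To make the long-tailed setting concrete, here is a minimal sketch (not from the paper) that splits a synthetic power-law graph into head and tail nodes by a degree threshold. The threshold k and the helper split_head_tail are illustrative assumptions, not the paper's definition of a tail node.

# Minimal sketch: on a power-law graph, most nodes end up in the tail.
import networkx as nx

def split_head_tail(graph: nx.Graph, k: int = 5):
    """Return (head, tail) node lists; tail nodes have degree at most k."""
    head, tail = [], []
    for node, degree in graph.degree():
        (tail if degree <= k else head).append(node)
    return head, tail

g = nx.barabasi_albert_graph(n=1000, m=2, seed=0)  # scale-free synthetic graph
head, tail = split_head_tail(g, k=5)
print(f"head: {len(head)} nodes, tail: {len(tail)} nodes")  # the tail dominates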

Cited by 46 publications (33 citation statements)
References 29 publications
“…We hope that new models will more regularly include datasets such as Hetionet during their development phase. More generally, and taking cues from the field of GNNs [25,26,44], new methods could be developed which consider how best to learn meaningful representations for low-degree entities.…”
Section: Discussion (mentioning; confidence: 99%)
“…The issue of non-uniform graph connectivity (typically in homogeneous graphs) has begun to be studied in parallel by the field of Graph Neural Networks (GNNs), where researchers have shown that models learn low-quality representations, and thus make more incorrect predictions, for low-degree vertices [26,25,44]. This has also been explored in the context of homogeneous graph representation learning [3] and for random walks [23,36].…”
Section: Previous Work (mentioning; confidence: 99%)
“…Another subtype of meta-learning called hypernetwork [15,30] uses a secondary neural network to generate the weights for the target neural network, i.e., it learns to generate different weights conditioned on different inputs, instead of freezing the weights for all inputs after training as in traditional neural networks. More recently, meta-learning has also been adopted on graphs for few-shot learning, such as Meta-GNN [55], GFL [50], GPN [5], RALE [25] and meta-tail2vec [26], which is distinct from inductive semi-supervised node classification as further elaborated in Section 3.1. Hypernetwork-based approaches have also emerged, such as LGNN [24], which adapts GNN weights to different local contexts, and GNN-FiLM [3], which adapts to different relations in a relational graph.…”
Section: Related Work (mentioning; confidence: 99%)
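The hypernetwork idea quoted above (a secondary network that generates the target network's weights, conditioned on the input) can be illustrated with a short sketch. This is not LGNN, GNN-FiLM, or the cited paper's architecture; the HyperLinear module and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weights are generated from a per-input context vector."""
    def __init__(self, ctx_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        # The hypernetwork maps a context vector to a flattened weight matrix and a bias.
        self.weight_gen = nn.Linear(ctx_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(ctx_dim, out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # Different inputs get different weights, instead of one frozen weight matrix.
        W = self.weight_gen(ctx).view(-1, self.out_dim, self.in_dim)  # (B, out, in)
        b = self.bias_gen(ctx)                                        # (B, out)
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b

layer = HyperLinear(ctx_dim=16, in_dim=32, out_dim=8)
x = torch.randn(4, 32)    # batch of input features
ctx = torch.randn(4, 16)  # per-input context, e.g. a summary of a node's locality
out = layer(x, ctx)       # shape (4, 8)

Conditioning on a locality summary is one plausible way to adapt weights per node, which is the spirit of the local-context adaptation mentioned in the quote.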
“…This problem is often challenging in practice because the degree distributions of most graphs follow a power law and there are many nodes with very few connections. Liu et al. [Liu+20] address this issue by applying meta-learning to the problem of node embedding of graphs. They set up a regression problem with a common prior to learn the node embeddings.…”
Section: Meta-Learning Applied to Graph Problems (mentioning; confidence: 99%)
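As a rough illustration of "a regression problem with a common prior" (not the paper's actual meta-tail2vec procedure), the sketch below fits one regressor shared across nodes: each node's embedding is predicted from the mean of a few of its neighbors' embeddings, and the tail setting is simulated on well-connected head nodes by keeping only two neighbors. The Prior module, the train_prior helper, and all shapes are assumptions.

import torch
import torch.nn as nn

class Prior(nn.Module):
    """Shared regressor (the common prior): neighborhood summary -> node embedding."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, neigh_mean: torch.Tensor) -> torch.Tensor:
        return self.net(neigh_mean)

def train_prior(embeddings, neighbors, head_nodes, epochs=50):
    """Fit the shared prior on head nodes, whose pretrained embeddings serve as targets."""
    prior = Prior(embeddings.size(1))
    opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
    for _ in range(epochs):
        for v in head_nodes:
            nbrs = neighbors[v][:2]  # keep only two neighbors to mimic a tail node
            if not nbrs:
                continue
            summary = embeddings[nbrs].mean(dim=0, keepdim=True)
            loss = nn.functional.mse_loss(prior(summary), embeddings[v:v + 1])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return prior

# At inference time, a tail node's embedding is regressed from its few neighbors:
# tail_emb = prior(embeddings[neighbors[tail_node]].mean(dim=0, keepdim=True))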