Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021
DOI: 10.1145/3447548.3467247
Relational Message Passing for Knowledge Graph Completion

Cited by 91 publications (55 citation statements)
References 18 publications
“…For example, for link prediction, GNN models such as GCN (Kipf & Welling, 2017) and GAE (Kipf & Welling, 2016) may perform even worse than simple heuristics such as common neighbors and Adamic-Adar (Liben-Nowell & Kleinberg, 2007) (see the performance comparison on the Collab and PPA networks in the Open Graph Benchmark (OGB) (Hu et al., 2020)). Similar issues appear widely in node-set-based tasks such as network motif prediction (Liu et al., 2022; Besta et al., 2021), motif counting (Chen et al., 2020), relation prediction (Wang et al., 2021a; Teru et al., 2020), and temporal interaction prediction (Wang et al., 2021b), which poses a big concern for applying GNNs to these relevant real-world applications.…”
Section: Introduction
confidence: 81%
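For reference on the heuristics named in this statement, below is a minimal sketch (my own illustration, not code from the cited papers) of scoring candidate links with the common-neighbors and Adamic-Adar heuristics via networkx; the graph and node pairs are placeholders chosen only to make the snippet runnable.

```python
import networkx as nx

# Placeholder graph and candidate node pairs; any undirected nx.Graph works here.
G = nx.karate_club_graph()
candidates = [(0, 9), (5, 16), (2, 33)]

# Common neighbors: |N(u) ∩ N(v)| for each candidate pair.
cn_scores = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in candidates}

# Adamic-Adar: sum over shared neighbors w of 1 / log(deg(w)).
aa_scores = {(u, v): score for u, v, score in nx.adamic_adar_index(G, candidates)}

# Rank candidate links by heuristic score (higher = more likely to exist).
ranked = sorted(candidates, key=lambda pair: aa_scores[pair], reverse=True)
print(cn_scores, ranked)
```

Heuristics of this kind use only local graph structure, which is the baseline the quoted statement compares GNN link predictors against.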
“…This corresponds to our MLM task, where masked tokens in Segment A can be predicted using Segment B (a linked document on the graph), and vice versa. In link prediction (Bordes et al., 2013; Wang et al., 2021a), the task is to predict the existence or type of an edge between two nodes. This corresponds to our DRP task, where we predict if the given pair of text segments are linked (edge), contiguous (self-loop edge), or random (no edge).…”
Section: Pretraining Tasks
confidence: 99%
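For context on the link-prediction formulation this statement cites, the following is a small sketch of the translation-based scoring function from Bordes et al. (2013) (TransE); the embeddings below are random placeholders rather than trained parameters, and the snippet is an assumption-laden illustration, not the cited paper's implementation.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # TransE plausibility: higher (less negative) means the relation embedding r
    # translates the head h closer to the tail t, i.e. f(h, r, t) = -||h + r - t||_1.
    return -float(np.linalg.norm(h + r - t, ord=1))

# Placeholder (untrained) embeddings purely for illustration.
rng = np.random.default_rng(0)
dim = 50
head, rel, tail = (rng.normal(size=dim) for _ in range(3))
print(transe_score(head, rel, tail))
```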
“…In describing the post-training process, we have assumed that the original model only leverages individual facts, which is by far the dominant approach in the literature. A few recent methods can also exploit contextual information, such as paths [18, 55], temporal details [29], or types [59]. While our current implementation focuses on fact-based models, the formulation of Kelpie can indeed be applied to these contextual models too: the Pre-Filter and the Explanation Builder, which are model-independent, would just work as usual, and the Relevance Engine could easily include contextual information in its post-training processes.…”
Section: Relevance Engine
confidence: 99%