2021
DOI: 10.1109/access.2021.3055529
RAGAT: Relation Aware Graph Attention Network for Knowledge Graph Completion

Cited by 46 publications (13 citation statements)
References 27 publications
“…GraphSAGE minimizes information loss by concatenating the vectors of neighbors rather than summing them into a single value during neighbor aggregation [40,41]. GAT uses attention to weigh the importance of each neighbor node or relation individually [21,42-47]. Since each model has different characteristics and advantages, the model best suited for KG alignment differs depending on the components and the topological structure of the KG.…”
Section: Knowledge Graph Alignment
confidence: 99%
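The aggregation contrast described above can be illustrated with a minimal sketch. The function names and dimensions below are hypothetical, not from GraphSAGE or GAT themselves; the sketch only shows why summing collapses neighborhood information, why concatenation preserves the node's own vector, and how attention scores weight neighbors individually.

```python
import numpy as np

def aggregate_sum(neighbors):
    # Sum-style aggregation: all neighbor vectors collapse into one vector,
    # so distinct neighborhoods can map to the same result (information loss).
    return np.sum(neighbors, axis=0)

def aggregate_concat(self_vec, neighbors):
    # GraphSAGE-style idea: keep the node's own vector separate and
    # concatenate it with the aggregated neighborhood, preserving more signal.
    return np.concatenate([self_vec, np.mean(neighbors, axis=0)])

def attention_weights(query, neighbors):
    # GAT-style idea: score each neighbor individually, then softmax-normalize
    # so that more relevant neighbors contribute more to the aggregate.
    scores = neighbors @ query
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()
```

Note that `aggregate_concat` doubles the output dimension relative to `aggregate_sum`, which is exactly the trade-off between information retention and model size that the passage alludes to.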
“…However, using multi-head attention can require a large number of parameters. To address this issue, the relation-aware graph attention network (RAGAT) [47] was proposed, which defines relation-aware message passing functions parameterized by relation-specific network parameters and averages the n attention heads instead of concatenating them. To validate the model, two different decoders (scoring functions) are used: ConvE and InteractE.…”
Section: Attention Neural Network Models
confidence: 99%
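The parameter-saving trick attributed to RAGAT above can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: averaging the n head outputs keeps the output dimension fixed, whereas concatenation multiplies it by n and thus inflates every downstream layer.

```python
import numpy as np

def multi_head_concat(heads):
    # Concatenating n heads multiplies the output dimension by n,
    # which grows the parameter count of every layer consuming it.
    return np.concatenate(heads)

def multi_head_average(heads):
    # Averaging (as RAGAT does) keeps the output dimension fixed,
    # reducing the parameter footprint relative to concatenation.
    return np.mean(np.stack(heads), axis=0)

def relation_aware_message(h_neighbor, w_rel):
    # Relation-aware message passing: each relation r has its own
    # parameter matrix w_rel transforming the neighbor's representation.
    return w_rel @ h_neighbor
```

With n = 8 heads of dimension d, concatenation yields an 8d-dimensional output while averaging stays at d, so a following d-out linear layer needs 8x fewer input weights.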
“…Graph neural networks (GNNs) learn a lower-dimensional representation of a node in a vector space by aggregating information from its neighbors using discrete hidden layers. The embedding can then be used for downstream tasks such as node classification (Atwood and Towsley, 2015), link prediction (Zhang and Chen, 2018; Li et al., 2020), and knowledge graph completion (Liu et al., 2021b).…”
Section: Introduction
confidence: 99%
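The downstream use of learned embeddings mentioned above can be shown with a small sketch. The dot-product decoder and threshold here are common illustrative choices, not the specific decoders used by the cited works: two nodes are predicted to be linked when their embeddings point in a similar direction.

```python
import numpy as np

def score_link(h_u, h_v):
    # A common link-prediction decoder: higher dot product = more likely edge.
    return float(h_u @ h_v)

def predict_links(emb, threshold=0.5):
    # Score every node pair from learned embeddings; a sigmoid maps
    # raw scores to probabilities, and pairs above the threshold are
    # predicted as edges.
    n = emb.shape[0]
    probs = 1.0 / (1.0 + np.exp(-(emb @ emb.T)))
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if probs[i, j] > threshold]
```

In practice the embeddings `emb` would come from a trained GNN encoder; here any array of node vectors demonstrates the decoding step.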