2021
DOI: 10.1049/cvi2.12066

Hybrid attention mechanism for few‐shot relational learning of knowledge graphs

Abstract: Few-shot knowledge graph (KG) reasoning is a central focus in the field of knowledge graph reasoning. To broaden the application areas of knowledge graphs, a large number of studies rely on large quantities of training samples. In practice, however, many relationships and entities are missing from the knowledge graph, and in most cases only a few training instances are available when a new relationship is introduced. To tackle this, in this study the authors aim to predict a new entity…

Cited by 6 publications (6 citation statements) · References 23 publications
“…However, many problems cannot be treated as a single task. Subdividing a problem into subproblems may cause information loss between the subproblems [9]. Figure 1 shows the structures of single-task and multi-task learning.…”
Section: Research on Multi-task Learning Urban Spatial Quality Attrib… (mentioning)
confidence: 99%
“…Existing work [32,33] has shown that constructing an explicit model can improve the accuracy of actual prediction. HAF [8] has shown that the one-hop neighborhood is helpful for entity prediction, but this framework does not consider the influence of the same relational attributes on the source entity's features under different training tasks. To address this issue, we design an adaptive weighted entity augmentation encoder.…”
Section: Entity-Enhanced Encoder (mentioning)
confidence: 99%
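
The excerpt above names an "adaptive weighted entity augmentation encoder" but does not reproduce it. As a rough, hypothetical sketch only, the following PyTorch snippet shows one way task-conditioned adaptive weighting over one-hop (relation, entity) neighbors could be realized; the class name, dimensions, and scoring layer are assumptions, not taken from either paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveNeighborEncoder(nn.Module):
    # Hypothetical sketch: score each one-hop (relation, entity) neighbor
    # conditioned on the few-shot task relation, softmax the scores into
    # adaptive weights, and fuse the weighted context into the source entity.
    def __init__(self, emb_dim: int):
        super().__init__()
        self.score = nn.Linear(3 * emb_dim, 1)   # [rel ; ent ; task] -> weight logit
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, src, rel_nbrs, ent_nbrs, task_rel):
        # src:      (d,)   source-entity embedding
        # rel_nbrs: (n, d) relation embeddings of the one-hop edges
        # ent_nbrs: (n, d) entity embeddings of the one-hop neighbors
        # task_rel: (d,)   embedding of the current task relation
        task = task_rel.expand_as(ent_nbrs)                    # broadcast to (n, d)
        logits = self.score(torch.cat([rel_nbrs, ent_nbrs, task], dim=-1))
        alpha = F.softmax(logits, dim=0)                       # adaptive weights
        ctx = (alpha * ent_nbrs).sum(dim=0)                    # weighted neighbor context
        return torch.tanh(self.fuse(torch.cat([src, ctx], dim=-1)))

# Example usage (random embeddings, d = 8, n = 5 neighbors):
enc = AdaptiveNeighborEncoder(emb_dim=8)
out = enc(torch.randn(8), torch.randn(5, 8), torch.randn(5, 8), torch.randn(8))

Under this assumption, the softmax over neighbor scores plays the role of the adaptive weights: neighbors sharing the same relational attribute can receive different weights in different tasks, because the score is conditioned on the task relation.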
“…Q, K, and V denote the Query, Key, and Value of the self-attention mechanism. Through the above process, this module obtains the weighted implicit features of the entity's one-hop neighbors via the Transformer encoder, as shown in Equation (8).…”
Section: Entity-Enhanced Encoder (mentioning)
confidence: 99%
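
Equation (8) itself is not reproduced in this excerpt. For reference, the standard scaled dot-product self-attention that a Transformer encoder applies over the neighbor sequence is

$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where Q, K, and V are linear projections of the one-hop neighbor embeddings and d_k is the key dimension. The citing paper's Equation (8) may differ in details (e.g. multi-head projections or residual connections); this is only the canonical form.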
“…Relying on abundant annotated data, deep learning techniques [1–3] have achieved significant breakthroughs in computer vision [4,5] thanks to their excellent ability to learn feature representations. However, in some applications, collecting annotated training data at such a scale is often laborious, expensive, or even impossible.…”
Section: Introduction (mentioning)
confidence: 99%