2020
DOI: 10.3390/app10113964

Learning Translation-Based Knowledge Graph Embeddings by N-Pair Translation Loss

Abstract: Translation-based knowledge graph embeddings learn vector representations of entities and relations by treating relations as translation operators over the entities in an embedding space. Since the translation is represented through a score function, translation-based embeddings are trained in general by minimizing a margin-based ranking loss, which assigns a low score to positive triples and a high score to negative triples. However, this type of embedding suffers from slow convergence and poor local optima b…
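For context, here is a minimal PyTorch sketch of the margin-based ranking loss the abstract describes, using the TransE score ||h + r − t||; all sizes and names are illustrative and not taken from the paper:

```python
import torch
import torch.nn.functional as F

# Illustrative sizes; real benchmarks (e.g., FB15k, WN18) are larger.
num_entities, num_relations, dim = 1000, 50, 100
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def transe_score(h, r, t):
    # TransE treats the relation as a translation: a plausible triple
    # has a small distance || h + r - t ||.
    return torch.norm(entity_emb(h) + relation_emb(r) - entity_emb(t), p=2, dim=-1)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Pushes each positive score at least `margin` below its negative's score.
    return F.relu(margin + pos_score - neg_score).mean()

# One positive triple (h, r, t) and one corrupted negative (h, r, t').
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([7])
t_neg = torch.tensor([42])
loss = margin_ranking_loss(transe_score(h, r, t), transe_score(h, r, t_neg))
loss.backward()
```

Because each update compares a positive triple against only one negative, the gradient signal per step is weak, which is the convergence problem the paper targets.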

Cited by 11 publications (5 citation statements)
References 26 publications
“…To better model complex relations, some models [4,13] allow the entity or the relation to have distinct representations. However, Song et al. [15] argue that translational distance models suffer from slow convergence and poor local optimality. Moreover, these shallow models are limited in their expressiveness [16].…”
Section: Triple-level Learning
confidence: 99%
“…A similar approach is TranSparse [5], which takes the complexity of the relationship into account in the design of the relational mapping matrix. Song et al. [15] argue that the loss of existing translation-based embedding models uses only a single pair of positive and negative triples in one update of the learning parameters, which leads to slow convergence and poor local optimality. Therefore, they proposed the N-pair translation loss, which considers multiple negative triples in one update.…”
Section: Related Work
confidence: 99%
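To illustrate the idea in this statement, here is a hedged sketch of an N-pair style loss that scores one positive triple against N negatives in a single update. It follows the generic N-pair form log(1 + Σᵢ exp(f(pos) − f(negᵢ))); the paper's exact formulation may differ:

```python
import torch

def n_pair_translation_loss(pos_score, neg_scores):
    # pos_score:  (batch,)   distances of positive triples (lower is better)
    # neg_scores: (batch, N) distances of N negative triples per positive
    # Generic N-pair form: log(1 + sum_i exp(pos - neg_i)).
    # With N = 1 this reduces to softplus(pos - neg), a smooth margin loss.
    diff = pos_score.unsqueeze(-1) - neg_scores            # (batch, N)
    zeros = torch.zeros_like(pos_score).unsqueeze(-1)      # the "1" term
    return torch.logsumexp(torch.cat([zeros, diff], dim=-1), dim=-1).mean()

# Example: a batch of 4 positives, each compared against N = 8 negatives.
pos = torch.rand(4, requires_grad=True)
negs = torch.rand(4, 8)
n_pair_translation_loss(pos, negs).backward()
```

Comparing against many negatives at once gives each update a richer gradient than the one-pair margin loss, which is the intuition behind the faster convergence the citing authors describe.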
“…Differences between the various embedding algorithms relate to three aspects: (i) how they represent entities and relations, (ii) how they define the scoring function, and (iii) how they optimize the ranking criterion that maximizes the global plausibility of the existing triples [16]. Some of the embedding models that show state-of-the-art performance in knowledge graph completion are translation-based models, which treat relations as translation operators over the entities in an embedding space [42] (e.g., TransE, TransR, TransH, UM). Alternatively, semantic matching models, which use the semantic similarity between entities and relations in the embedding space, are commonly used for the task (e.g., DistMult, RESCAL, and ER-MLP).…”
Section: Introduction
confidence: 99%
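To make the contrast in this statement concrete, here is a small sketch of the two families of scoring functions, with embeddings passed as plain tensors (illustrative signatures, not any particular library's API):

```python
import torch

def transe_score(h, r, t):
    # Translational distance model: plausibility decreases with || h + r - t ||,
    # so the distance is negated to make higher scores mean "more plausible".
    return -torch.norm(h + r - t, p=2, dim=-1)

def distmult_score(h, r, t):
    # Semantic matching model: a bilinear product with a diagonal relation
    # matrix, sum_k h_k * r_k * t_k; higher scores mean "more plausible".
    return (h * r * t).sum(dim=-1)

# Both operate on d-dimensional embeddings; random vectors here, for shape only.
h, r, t = torch.randn(3, 100).unbind(0)
print(transe_score(h, r, t), distmult_score(h, r, t))
```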