2016
DOI: 10.1609/aaai.v30i1.10091

Locally Adaptive Translation for Knowledge Graph Embedding

Abstract: Knowledge graph embedding aims to represent the entities and relations of a large-scale knowledge graph as elements of a continuous vector space. Existing methods, e.g., TransE and TransH, learn the embedding representation by defining a global margin-based loss function over the data. However, the optimal loss function is determined during experiments, with its parameters examined over a closed set of candidates. Moreover, embeddings over two knowledge graphs with different entities and relations share the same set …
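To make the contrast concrete, below is a minimal Python sketch of the global margin-based loss that TransE-style models optimize, as described in the abstract. The embeddings, toy triples, and function names are illustrative stand-ins, not the paper's implementation; the single fixed margin gamma shared by every triple is exactly what this paper's locally adaptive margin is designed to replace.

import numpy as np

rng = np.random.default_rng(0)

NUM_ENTITIES, NUM_RELATIONS, DIM = 5, 2, 8
entity_emb = rng.normal(size=(NUM_ENTITIES, DIM))
relation_emb = rng.normal(size=(NUM_RELATIONS, DIM))

def energy(h, r, t):
    """TransE energy E(h, r, t) = ||h + r - t||_2: low for plausible triples."""
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def global_margin_loss(triples, corrupted, gamma=1.0):
    """Sum of hinge terms [gamma + E(pos) - E(neg)]_+ over the data.
    gamma is one global hyperparameter, tuned from a closed candidate set."""
    loss = 0.0
    for (h, r, t), (h_c, r_c, t_c) in zip(triples, corrupted):
        loss += max(0.0, gamma + energy(h, r, t) - energy(h_c, r_c, t_c))
    return loss

# Toy positive triples and head- or tail-corrupted negatives.
positives = [(0, 0, 1), (2, 1, 3)]
negatives = [(0, 0, 4), (2, 1, 0)]
print(global_margin_loss(positives, negatives, gamma=1.0))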

Cited by 63 publications (9 citation statements). References 18 publications.
“…Following the translational principle, TransG [91] is proposed to handle the multiple-relation-semantics problem by mining the multiple potential meanings of entity pairs whose relations may be associated with corresponding triples in the knowledge graph. By improving the loss function of earlier translational methods, TransA [92] can adaptively learn the embeddings of entities and relations in a knowledge graph. In addition, since previous approaches ignored the hierarchical routine of human cognition, TransAt [93] adopts an attention mechanism to address this problem.…”
Section: Translational Models (mentioning)
Confidence: 99%
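As a rough illustration of the "adaptive" idea credited to TransA [92] above, the sketch below derives a margin per triple from its local neighborhood instead of fixing one global hyperparameter. The specific recipe here (half the energy gap to the nearest corrupted tail) is an assumption made for illustration, not the paper's exact definition.

import numpy as np

rng = np.random.default_rng(1)
DIM = 8
entity_emb = rng.normal(size=(5, DIM))
relation_emb = rng.normal(size=(2, DIM))

def energy(h, r, t):
    # TransE-style energy ||h + r - t||_2.
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def local_margin(h, r, t, corrupted_tails):
    """Illustrative per-triple margin: half the gap between the closest
    corrupted tail's energy and the true triple's energy, floored at zero."""
    neg = min(energy(h, r, t_c) for t_c in corrupted_tails)
    return max(0.0, 0.5 * (neg - energy(h, r, t)))

print(local_margin(0, 0, 1, corrupted_tails=[2, 3, 4]))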
“…Elon Musk is an entrepreneur, investor, and engineer. To tackle these challenges, knowledge graph embedding has been proposed and has attracted much attention: it can map a knowledge graph into a dense, low-dimensional feature space [19][20][21][22][23][24][25], efficiently compute semantic relations between entities in that low-dimensional space, and effectively mitigate the problems of computational complexity and data sparsity. The method can further be used to discover new knowledge from existing facts (link prediction [19,23]), disambiguate entities (entity resolution [22,24]), extract relations (relation classification [26,27]), etc.…”
Section: Elon Musk FounderOf LocatedIn (mentioning)
Confidence: 99%
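For instance, the link-prediction task mentioned above reduces to ranking candidate entities by the model's energy. A small sketch, with random embeddings standing in for what a trained model such as TransE would supply:

import numpy as np

rng = np.random.default_rng(2)
NUM_ENTITIES, DIM = 6, 8
entity_emb = rng.normal(size=(NUM_ENTITIES, DIM))
r = rng.normal(size=DIM)  # embedding of the query relation
h = 0                     # index of the query head entity

# Energies of (h, r, t) for every candidate tail t; lower = more plausible.
energies = np.linalg.norm(entity_emb[h] + r - entity_emb, axis=1)
ranking = np.argsort(energies)
print("candidate tails, most plausible first:", ranking.tolist())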
“…where E(h, r, t) is the energy function of each model, γ is the margin, and (h′, r′, t′) denotes a “corrupted” triple that does not exist in S. Unlike the aforementioned models, which focus on different choices of E(h, r, t), TransA (Jia et al. 2016) introduces an adaptive local-margin approach that determines γ from a closed set of entity candidates. Other similar models include RESCAL (Nickel, Tresp, and Kriegel 2011), Semantic Matching Energy (SME) (Bordes et al. 2012), and the Latent Factor Model (LFM) (Jenatton et al. 2012).…”
Section: Related Work (mentioning)
Confidence: 99%
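The "where" clause above refers to an equation the excerpt truncates. In the standard notation these translational papers share, the global margin-based ranking objective being described is (a reconstruction, not a verbatim quote from the citing paper):

\[
  \mathcal{L} \;=\; \sum_{(h,r,t)\in S} \; \sum_{(h',r',t')\in S'}
  \bigl[\, \gamma + E(h,r,t) - E(h',r',t') \,\bigr]_{+},
\]

where S is the set of observed triples, S′ the set of corrupted triples, and [x]_+ = max(0, x) is the hinge function.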