2019
DOI: 10.1007/978-3-030-30793-6_20

Incorporating Literals into Knowledge Graph Embeddings

Abstract: Knowledge graphs are composed of different elements: entity nodes, relation edges, and literal nodes. Each literal node contains an entity's attribute value (e.g. the height of an entity of type person) and thereby encodes information which in general cannot be represented by relations between entities alone. However, most of the existing embedding- or latent-feature-based methods for knowledge graph analysis only consider entity nodes and relation edges, and thus do not take the information provided by literals into account.
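To make the distinction concrete, a minimal sketch in Python (entity IDs, relation names, and attribute values are all hypothetical) of the two kinds of statements a knowledge graph holds:

# Relational triples: (head entity, relation, tail entity)
relational_triples = [
    ("person_1", "works_at",   "university_1"),
    ("person_1", "citizen_of", "country_1"),
]

# Attributive triples: (entity, attribute, literal value).
# The literal values carry information that no entity-to-entity
# relation can express on its own.
literal_triples = [
    ("person_1", "height_cm",  181.0),         # numeric literal
    ("person_1", "birth_date", "1979-03-14"),  # date-time literal
]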

Cited by 57 publications (36 citation statements)
References 17 publications
“…Literals (e.g. numeric values, date-time instances, or other strings) in KGs are conventionally ignored (Rosso et al., 2020) by embedding approaches, or are incorporated through specific means (Kristiadi et al., 2019). However, after removing statements with literals, less than 3% of the remaining statements contain any qualifier pairs.…”
Section: WD50K Dataset
confidence: 99%
“…Kristiadi et al. [38] considered the semantic information carried by literal values in knowledge graphs, and proposed a new representation learning mechanism, LiteralE (see Fig. 5).…”
Section: Information Fusion Within Knowledge Graph
confidence: 99%
“…TEKE [49] defines context vectors of entities and relations and combines them with traditional models such as TransE; Xie et al. [51] and Xu et al. [52] encode textual literals with convolutional and recurrent neural networks. LiteralE [24] replaces the original entity embeddings in conventional loss functions with literal-enriched vectors, which are defined by learnable parametrized functions. SimplE [23] incorporates certain types of background knowledge into the model by weight tying.…”
Section: KG Embedding Techniques
confidence: 99%
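The statement above summarizes the core idea: the entity embedding e is replaced by a literal-enriched vector g(e, l) before it enters the scoring function. A minimal PyTorch sketch of a gated variant of such a learnable function (class and parameter names here are illustrative, not the authors' reference code):

import torch
import torch.nn as nn

class LiteralGate(nn.Module):
    """Learnable function g(e, l): combines an entity embedding e with
    the entity's numeric literal features l (one slot per attribute).
    A sketch of the gated combination described for LiteralE."""
    def __init__(self, emb_dim: int, num_literals: int):
        super().__init__()
        self.gate = nn.Linear(emb_dim + num_literals, emb_dim)
        self.transform = nn.Linear(emb_dim + num_literals, emb_dim)

    def forward(self, e: torch.Tensor, l: torch.Tensor) -> torch.Tensor:
        x = torch.cat([e, l], dim=-1)
        z = torch.sigmoid(self.gate(x))    # gate in (0, 1)
        h = torch.tanh(self.transform(x))  # candidate enriched vector
        return z * h + (1 - z) * e         # interpolate with original e

# Usage: drop g(e, l) into any scoring function in place of e, e.g.
# DistMult's score(h, r, t) becomes <g(e_h, l_h), w_r, g(e_t, l_t)>.
g = LiteralGate(emb_dim=200, num_literals=10)
e = torch.randn(4, 200)  # batch of entity embeddings
l = torch.randn(4, 10)   # corresponding literal feature vectors
e_enriched = g(e, l)     # shape: (4, 200)

Because the module only rewrites the entity representation, it is portable: the surrounding scoring function and loss stay unchanged.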
“…How can the massive literals attached to the different entities of educational KGs be appropriately incorporated into embedding representations? We have noticed a model named LiteralE [24], which utilizes literals for embedding learning via a simple, portable module. However, LiteralE only considers numerical literals, which make up only a limited part of the literals in educational KGs.…”
Section: Introduction
confidence: 99%
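One way to lift the numeric-only restriction noted above is to first encode non-numeric literals into vectors, in the spirit of the RNN-based text encoders cited in the KG embedding techniques statement. A minimal sketch (all names hypothetical, assuming integer-encoded characters as input):

import torch
import torch.nn as nn

class TextLiteralEncoder(nn.Module):
    """Character-level GRU that maps a textual literal (e.g. a name or
    description string) to a fixed-size vector, which can then be fed
    to a LiteralE-style g(e, l) alongside numeric literal features."""
    def __init__(self, vocab_size: int, char_dim: int, out_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.gru = nn.GRU(char_dim, out_dim, batch_first=True)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len) integer-encoded characters
        _, h = self.gru(self.embed(char_ids))
        return h.squeeze(0)  # (batch, out_dim) literal vector

enc = TextLiteralEncoder(vocab_size=128, char_dim=32, out_dim=200)
ids = torch.randint(0, 128, (4, 20))  # 4 strings of 20 chars (ASCII ids)
vec = enc(ids)                        # shape: (4, 200)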