Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1221

Mapping Text to Knowledge Graph Entities using Multi-Sense LSTMs

Abstract: This paper addresses the problem of mapping natural language text to knowledge base entities. The mapping process is approached as a composition of a phrase or a sentence into a point in a multi-dimensional entity space obtained from a knowledge graph. The compositional model is an LSTM equipped with a dynamic disambiguation mechanism on the input word embeddings (a Multi-Sense LSTM), addressing polysemy issues. Further, the knowledge base space is prepared by collecting random walks from a graph enhanced with…
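The dynamic disambiguation mechanism mentioned in the abstract can be illustrated with a minimal sketch, not the authors' implementation: each word carries several candidate sense embeddings, and an attention distribution conditioned on context (e.g. the LSTM's previous hidden state) selects a convex combination of them before the word enters the LSTM. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def disambiguate(sense_vectors, context):
    """Attend over a word's candidate sense vectors using the current context.

    sense_vectors: (K, d) array, one row per candidate sense of the word.
    context: (d,) array, e.g. the LSTM's previous hidden state.
    Returns a single (d,) disambiguated input embedding.
    """
    scores = sense_vectors @ context   # (K,) similarity of each sense to context
    weights = softmax(scores)          # attention distribution over senses
    return weights @ sense_vectors     # convex combination of sense vectors

# toy example: two orthogonal senses, context aligned with the first sense,
# so the disambiguated embedding should lean heavily toward sense 0
senses = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
context = np.array([5.0, 0.0])
emb = disambiguate(senses, context)
```

In the toy example the attention mass concentrates on the first sense, so `emb` is dominated by its vector; with a uniform context the result would fall back to an average of the senses.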


Cited by 43 publications (55 citation statements)
References 32 publications
“…Question-answering with knowledge graphs. Our work is also related to the domain of question answering and reasoning in knowledge graphs (Das et al., 2018; Xiong et al., 2018; Hamilton et al., 2018; Xiong et al., 2017; Welbl et al., 2018; Kartsaklis et al., 2018), where either the model is provided with a knowledge graph to perform inference over or the model must infer a knowledge graph from the text itself. However, unlike previous benchmarks in this domain, which are generally transductive and focus on leveraging and extracting knowledge graphs as a source of background knowledge about a fixed set of entities, CLUTRR requires inductive logical reasoning, where every example requires reasoning over a new set of previously unseen entities.…”
Section: Related Work
confidence: 99%
“…• The model not-pretrained is based on the approach of Kartsaklis et al. (2018). They recently proposed a method to obtain single-sense and multi-sense vector embeddings during training (in contrast to our use of pre-trained embeddings for both).…”
Section: Baselines
confidence: 99%
“…A machine-learned vector space model (Speer and Chin, 2016) has been used which combines word embeddings produced by GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013) with tightly structured semantic networks such as ConceptNet (Speer and Havasi, 2012). Kartsaklis et al. (2018) proposed a method that maps natural language text to knowledge-base entities; they enhanced an LSTM model with a dynamic disambiguation mechanism on the input word embeddings that addresses polysemy issues. This method has achieved state-of-the-art performance in many word-similarity evaluations.…”
Section: Related Work
confidence: 99%