Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1277

Deep Joint Entity Disambiguation with Local Neural Attention

Abstract: We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or …
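The local component described in the abstract can be illustrated with a minimal PyTorch sketch. All tensor names, the bilinear parameters A and B, and the top-K hard-attention cutoff are assumptions for exposition, not the authors' released implementation; the joint inference stage, which couples these local scores across all mentions in a document, is omitted:

```python
import torch
import torch.nn.functional as F

def local_attention_score(context_vecs, entity_vecs, A, B, K=25):
    """Illustrative local score: attend over context words per candidate entity.

    context_vecs: (n_words, d)  word embeddings in the mention's context window
    entity_vecs:  (n_cands, d)  pretrained candidate entity embeddings
    A, B:         (d, d)        learned parameter matrices (assumed shapes)
    K:            hard-attention cutoff (keep the K most relevant words)
    """
    # Word relevance: each word is scored against its best-matching candidate.
    u = (entity_vecs @ A @ context_vecs.T).max(dim=0).values   # (n_words,)
    # Hard pruning: keep only the top-K context words, mask out the rest.
    topk = torch.topk(u, min(K, u.numel())).indices
    masked = torch.full_like(u, float("-inf"))
    masked[topk] = u[topk]
    beta = F.softmax(masked, dim=0)                            # attention weights
    # Attention-weighted context representation.
    ctx = beta @ context_vecs                                  # (d,)
    # Local compatibility of each candidate with the attended context.
    return entity_vecs @ B @ ctx                               # (n_cands,)
```

In a full system, this local score would typically be combined with a mention-entity prior before joint disambiguation.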

Cited by 256 publications (293 citation statements) | References 21 publications
“…Later, some work [11, 12] proposed to convert mentions and candidate entities into a common vector space, and then disambiguated candidate entities by a scoring function (e.g., cosine similarity). In recent years, neural network-based approaches have shown considerable success in entity normalization [13–15]. These methods used neural architectures to learn the context representations around an entity mention and calculated context-entity similarity scores to determine which candidate is the correct assignment.…”
Section: Introduction
Confidence: 99%
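The shared-space scoring this statement describes can be sketched in a few lines of NumPy; the function and variable names are hypothetical:

```python
import numpy as np

def rank_candidates(mention_vec, candidate_vecs):
    """Hypothetical scorer: mention and candidate entities live in one vector
    space, and candidates are ranked by cosine similarity to the mention."""
    m = mention_vec / np.linalg.norm(mention_vec)
    C = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = C @ m                       # cosine similarity per candidate
    return np.argsort(-scores), scores   # best candidate first
```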
“…A KGQA system as in Section 2.2 collects candidate answers. A corpus search collects snippets from top-ranking documents and annotates them with KG entities (Globerson et al., 2016; Ganea and Hofmann, 2017). For each candidate in the union, features are collected from both KG and corpus snippets to rank them.…”
Section: Combining Corpus and KG
Confidence: 99%
“…However, Sun [1] argues that these methods are insufficient to disentangle the underlying explanatory factors of the data and proposes a method which employs a convolutional neural network (CNN) to encode the entity description. Some other methods [3, 7–9] try to encode the entity based on the idea of word embedding, which only takes entity context into consideration. However, all the methods above fail to capture different information aspects of the entity, which could result in a loss of information.…”
Section: Introduction
Confidence: 99%
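A rough sketch of the description-encoding idea mentioned above (PyTorch; the class name and all hyperparameters are assumptions, not the cited paper's configuration):

```python
import torch
import torch.nn as nn

class DescriptionEncoder(nn.Module):
    """Illustrative CNN encoder for an entity description: embed tokens,
    convolve over the sequence, max-pool into a fixed-size entity vector."""
    def __init__(self, vocab_size, emb_dim=128, n_filters=128, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))               # (batch, n_filters, seq_len)
        return h.max(dim=2).values                 # max-pool -> (batch, n_filters)
```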
“…The semantics of a mention mainly come from the mention context and from other mentions in the given document. For the mention context, most previous methods assume that all the words in the context have the same importance [1, 4, 6, 8, 12]. Obviously, this assumption should be examined carefully.…”
Section: Introduction
Confidence: 99%
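The contrast this statement draws, uniform word importance versus learned weighting, amounts to the difference between mean pooling and attention. A hypothetical sketch (PyTorch; names are illustrative):

```python
import torch
import torch.nn.functional as F

def context_repr(word_vecs, entity_vec, use_attention=True):
    """Uniform averaging treats every context word alike; attention reweights
    words by their relevance to the candidate entity."""
    if not use_attention:
        return word_vecs.mean(dim=0)                    # all words equally important
    weights = F.softmax(word_vecs @ entity_vec, dim=0)  # relevance to the entity
    return weights @ word_vecs                          # attention-weighted context
```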