2019
DOI: 10.3390/info10020046

Attention-Based Joint Entity Linking with Entity Embedding

Abstract: Entity linking (also called entity disambiguation) aims to map the mentions in a given document to their corresponding entities in a target knowledge base. In order to build a high-quality entity linking system, efforts are made in three parts: encoding of the entity, encoding of the mention context, and modeling the coherence among mentions. For the encoding of the entity, we use a long short-term memory (LSTM) network and a convolutional neural network (CNN) to encode the entity context and entity description, respectivel…
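The entity encoder the abstract describes (an LSTM over the entity context and a CNN over the entity description, combined into one entity embedding) can be sketched in a few lines. The following is a minimal illustrative sketch assuming PyTorch, toy dimensions, and hypothetical input tensors; it is not the authors' released implementation.

```python
# Minimal sketch (not the paper's code): LSTM over entity-context tokens plus a
# CNN over entity-description tokens, concatenated into one entity embedding.
# Layer sizes and input shapes are illustrative assumptions.
import torch
import torch.nn as nn

class EntityEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=100,
                 num_filters=100, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # LSTM encodes the entity context (words surrounding the entity in the KB text)
        self.context_lstm = nn.LSTM(emb_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        # CNN encodes the entity description (e.g., the entity's definition paragraph)
        self.desc_cnn = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=1)

    def forward(self, context_ids, desc_ids):
        # context_ids, desc_ids: (batch, seq_len) token-id tensors
        ctx = self.embed(context_ids)                   # (B, Lc, E)
        _, (h_n, _) = self.context_lstm(ctx)            # h_n: (2, B, H)
        ctx_vec = torch.cat([h_n[0], h_n[1]], dim=-1)   # (B, 2H) final states, both directions

        desc = self.embed(desc_ids).transpose(1, 2)     # (B, E, Ld) for Conv1d
        feat = torch.relu(self.desc_cnn(desc))          # (B, F, Ld)
        desc_vec = feat.max(dim=2).values               # max-pool over time -> (B, F)

        return torch.cat([ctx_vec, desc_vec], dim=-1)   # joint entity embedding
```

A call such as `EntityEncoder(vocab_size=50_000)(context_ids, desc_ids)` would return one joint vector per entity in the batch, which the other two components named in the abstract (mention-context encoding and coherence modeling) could then score against.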

Cited by 6 publications (3 citation statements)
References 17 publications
“…When it comes to deep learning approaches, Phan et al (2017) employed Long Short-Term Memory (LSTM) networks with an attention mechanism. Ganea and Hofmann (2017) used an attention mechanism over local context windows to spot important words, and Liu et al (2019) expanded this to important spans with conditional random fields. While these approaches used neural networks with an attention mechanism to model the named entities and mentions together and pick the best matching candidate entity, we used a simple LSTM to model the type prediction of the mentions only and then used that information as an extra clue for a simple feed-forward neural network-based ranking model.…”
Section: Related Work
confidence: 99%
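The word-level attention this passage attributes to Ganea and Hofmann (2017) can be illustrated with a short sketch. This is a simplified approximation under stated assumptions (pre-trained word and candidate-entity embeddings, given bilinear matrices A and B, and hard top-k pruning of context words); it is not the cited authors' code.

```python
# Minimal sketch (assumptions, not the cited implementation): attention over a
# local context window, where each word is weighted by its relevance to the
# candidate entities and the attended context is matched against each candidate.
import torch

def score_candidates(word_vecs, cand_vecs, A, B, keep_top=0.5):
    # word_vecs: (L, d) embeddings of words in the mention's context window
    # cand_vecs: (C, d) embeddings of the candidate entities
    # A, B:      (d, d) bilinear matrices (learned in practice; given here)
    relevance = (word_vecs @ A @ cand_vecs.T).max(dim=1).values  # (L,) best candidate match per word
    k = max(1, int(keep_top * word_vecs.size(0)))
    topk = torch.topk(relevance, k)
    alpha = torch.zeros_like(relevance)
    alpha[topk.indices] = torch.softmax(topk.values, dim=0)      # prune, then normalize weights
    context_vec = alpha @ word_vecs                              # (d,) attended context vector
    return cand_vecs @ B @ context_vec                           # (C,) score per candidate entity
```

The idea is that only the most entity-relevant words in the local window contribute to the context representation, which is then matched against every candidate; per the quote, Liu et al (2019) extend this word-level selection to spans using conditional random fields.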
“…WordNet uses synset records to define a hypernym hierarchy among them, reflecting which synset is a type of which other. YAGO v3.1 (Mahdisoltani et al. 2015) uses those synsets as category labels to represent certain types of named entities, such as person, musician, and city. Moreover, YAGO extends the WordNet synsets with its own specialized synsets, prefixed "wikicat" (e.g., wordnet_person_100007846 is a hypernym of wikicat_Men in the YAGO synset taxonomy).…”
Section: Input To Get E Syn W/word2vecf
confidence: 99%
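The hypernym hierarchy this quote refers to can be inspected directly. Below is a minimal sketch using NLTK's WordNet interface (an assumption for illustration; the cited papers do not necessarily use NLTK) that prints each hypernym chain from a synset up to the root of the noun taxonomy.

```python
# Minimal sketch: walk WordNet's hypernym hierarchy with NLTK.
# Assumes the WordNet corpus is installed (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

for path in wn.synset('person.n.01').hypernym_paths():
    # hypernym_paths() returns each chain from the root down to the synset;
    # reverse it to read from the synset up toward the root.
    print(' -> '.join(s.name() for s in reversed(path)))
```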
“…These neural-network-based entity linking models [17][18][19] can use the surrounding context information to address the linking issue. The global consistency information is also introduced together with the local model for better disambiguation [20][21][22][23][24][25][26][27][28]. These models improve the performance of term representation and linking. However, these methods represent mentions and candidate entities separately, ignoring the interaction information between them.…”
Section: Introduction
confidence: 99%