2018
DOI: 10.48550/arxiv.1808.07699
Preprint

End-to-End Neural Entity Linking

Cited by 17 publications (25 citation statements)
References 0 publications
“…We experiment with two settings: zero-shot, assuming we know the location of the entities, and full fine-tuning, following the exact methodology proposed in De Cao et al (2020). Specifically, for end-to-end Entity Linking, we aim to reproduce the setting of Kolitsas et al (2018). We evaluate using the aforementioned InKB micro-F1 with the same in-domain and out-of-domain datasets as described by De Cao et al…”
Section: Entity Linking (mentioning)
confidence: 99%
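The excerpt above evaluates with InKB micro-F1, i.e. precision and recall pooled over all documents and restricted to gold mentions whose entity exists in the knowledge base. A minimal sketch of that metric, assuming annotations are plain (doc_id, start, end, entity_id) tuples with out-of-KB gold mentions already dropped (an illustrative layout, not the evaluation code of the cited papers):

# Hedged sketch of InKB micro-F1 for end-to-end entity linking.
# Assumption: gold/pred are iterables of (doc_id, start, end, entity_id) tuples,
# and gold mentions that link to entities outside the KB were removed beforehand.
def inkb_micro_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                        # exact span + entity matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: the prediction recovers one of two gold links plus one spurious link.
gold = {("doc1", 0, 5, "Q30"), ("doc1", 10, 17, "Q64")}
pred = {("doc1", 0, 5, "Q30"), ("doc1", 20, 25, "Q1")}
print(inkb_micro_f1(gold, pred))                 # 0.5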
“…We plan to identify more accurate entities by relying on attention weights in LMs (Clark et al, 2019; Hewitt & Manning, 2019) instead of using extra resources. We will also investigate stronger entity linkers (Kolitsas et al, 2018) and learn a more robust relation mapping through weak or distant supervision (Mintz et al, 2009; Ratner et al, 2017). We will investigate more sophisticated approaches, such as graph neural networks (Kipf & Welling, 2016), to generate more accurate relation phrases from the attention weight matrices by considering structural information.…”
Section: Analysis of Unmapped Facts (mentioning)
confidence: 99%
“…So in this study, we will explore a different approach, which initializes larger position embeddings based on the existing small one in BERT-Base and can be used directly in fine-tuning without expensive retraining. Following Logeswaran et al (2019), we adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more expensive but powerful candidate ranking stage (Ganea and Hofmann, 2017; Kolitsas et al, 2018; Wu et al, 2019). We use BM25 for the candidate generation stage and get 64 candidate entities for every mention.…”
Section: Modeling Long Documents (mentioning)
confidence: 99%
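The two-stage pipeline in the excerpt above first retrieves a cheap shortlist of candidates, then ranks them with a heavier model. A minimal sketch of the BM25 candidate-generation stage, assuming the third-party rank_bm25 package and a toy three-entity index (real systems index Wikipedia-scale entity descriptions; entity texts and function names here are illustrative):

# Assumption: the rank_bm25 package provides BM25Okapi / get_top_n as used below.
from rank_bm25 import BM25Okapi

# Toy entity "index": one short text description per entity.
entities = [
    "United States of America country in North America",
    "Paris capital city of France",
    "Paris Hilton American media personality",
]
bm25 = BM25Okapi([desc.lower().split() for desc in entities])

def generate_candidates(mention_context, k=64):
    """Return up to k entity descriptions ranked by BM25 against the mention context."""
    query = mention_context.lower().split()
    return bm25.get_top_n(query, entities, n=k)

print(generate_candidates("the French capital Paris", k=2))

The candidate-ranking stage then re-scores only these k candidates per mention, which keeps the cost of the expensive model bounded.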
“…Figure 2 describes how to use BERT for zero-shot entity linking tasks with larger position embeddings. Following Logeswaran et al (2019), we adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more expensive but powerful candidate ranking stage (Ganea and Hofmann, 2017; Kolitsas et al, 2018; Wu et al, 2019). We use BM25 for the candidate generation stage and get 64 candidate entities for every mention.…”
(mentioning)
confidence: 99%