Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1026

Learning Dynamic Context Augmentation for Global Entity Linking

Abstract: Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and often suffer from high computational cost at the inference stage due to the complex search space. In this paper, we propose a simple yet effective solution, called Dynamic Context Augmentation (DCA), for collective EL, which requires only one pass through the mentions in a document. DCA sequentially accumulates context i…
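The abstract's central algorithmic claim is the single pass: each mention is resolved once, in document order, and the chosen entity is appended to a growing "dynamic context" that informs later decisions, instead of jointly optimizing coherence over all mentions. Below is a minimal, self-contained sketch of that control flow, with toy token-overlap scores standing in for the paper's learned model; all names and scoring functions here are illustrative assumptions, not the authors' code.

```python
# Sketch of the one-pass Dynamic Context Augmentation (DCA) control flow
# described in the abstract. The two scoring functions are toy stand-ins
# (token overlap); a real system would use learned embeddings.

def local_score(mention: str, candidate: str) -> float:
    # Toy local compatibility: token overlap between mention and entity name.
    return len(set(mention.lower().split()) & set(candidate.lower().split("_")))

def coherence_score(candidate: str, linked: list[str]) -> float:
    # Toy coherence: count already-linked entities sharing a token with the
    # candidate. This is the "dynamic context" part: it only looks at
    # entities accumulated so far, not at all mentions jointly.
    cand = set(candidate.lower().split("_"))
    return sum(bool(cand & set(e.lower().split("_"))) for e in linked)

def dca_link(mentions: list[str], candidates: dict[str, list[str]]) -> list[tuple[str, str]]:
    linked: list[str] = []            # entities chosen so far (dynamic context)
    decisions = []
    for mention in mentions:          # single pass over mentions, in order
        best = max(candidates[mention],
                   key=lambda e: local_score(mention, e) + coherence_score(e, linked))
        decisions.append((mention, best))
        linked.append(best)           # augment the context for later mentions
    return decisions

# Hypothetical example: candidates are Wikipedia-style titles.
print(dca_link(
    ["Australian", "Dave Gilbert"],
    {"Australian": ["Australia", "Australia_national_cricket_team"],
     "Dave Gilbert": ["Dave_Gilbert_(cricketer)", "Dave_Gilbert_(game_designer)"]},
))
```

The point of the sketch is the loop structure, not the toy scores: there is no search over joint assignments of all mentions, which is where the abstract's efficiency claim comes from.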

Cited by 49 publications (53 citation statements)
References 30 publications
“…By integrating the BERT-based entity similarity, our proposed model can correct 124 out of 185 (67.03%) type error cases of the baseline model, which demonstrates that we correct more than two-thirds of the type errors produced by the baseline. We have further examined and categorized the remaining 61 type error cases into three categories: (i) Due to prior: golden entities with a very low p(e|m_i) prior; (ii) Due to global: both the local context score and the prior score support predicting the golden entity, but the overall score supports predicting an incorrect entity due to global modeling; (iii) Due to local context: the local context score misleads the model into predicting the wrong entity, potentially because the mention context itself can be misleading, e.g. a document discussing cricket will favor resolving the mention "Australian" in the context "impressed by the positive influence of Australian coach Dave Gilbert" to the entity AUSTRALIA NATIONAL CRICKET TEAM instead of the gold entity AUSTRALIA.…”

The results table interleaved into the statement above (the final row is truncated in the source):

Methods                            AIDA-B        MSNBC       AQUAINT     ACE2004     CWEB        WIKI        Avg
Ganea and Hofmann (2017)           92.22 ± 0.14  93.7 ± 0.1  88.5 ± 0.4  88.5 ± 0.3  77.9 ± 0.1  77.5 ± 0.1  85.22
Le and Titov (2018)                93.07 ± 0.27  93.9 ± 0.2  88.3 ± 0.6  89.9 ± 0.8  77.5 ± 0.1  78.0 ± 0.1  85.51
Yang et al. (2019)                 94.64 ± 0.20  94.6 ± 0.2  87.4 ± 0.5  89.4 ± 0.4  73.5 ± 0.1  78.2 ± 0.1  84.62
BERT-Entity-Sim (local & global)   93.54 ± 0.12  93.4 ± 0.1  89.8 ± 0.4  88.9 ± 0.7  77.9 ± 0.…  (rest truncated)
Section: Discussion
confidence: 99%
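The three error categories correspond to the additive components a typical global EL model combines for each candidate: the mention-entity prior p(e|m), a local context score, and a global coherence score. A toy illustration of how a single component can flip the final decision, as in the "due to prior" case above; the log-space combination rule and all numbers are illustrative assumptions, not the cited model.

```python
import math

# Toy decomposition of a global EL candidate score into the three components
# the error analysis above refers to: prior p(e|m), local context score, and
# global coherence. The combination rule and numbers are assumptions.

def final_score(prior: float, local: float, global_coh: float) -> float:
    # Priors are probabilities, so they are combined in log space here.
    return math.log(max(prior, 1e-9)) + local + global_coh

# "Due to prior" error: the gold entity has a tiny p(e|m), so it loses even
# though its local-context and coherence scores beat the wrong candidate's.
gold_entity = final_score(prior=1e-6, local=2.0, global_coh=1.0)
wrong_entity = final_score(prior=0.9, local=0.5, global_coh=0.5)
print(gold_entity < wrong_entity)  # True: the prior term alone flips the decision
```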
“…Better Global Model: In order to investigate whether a better global model can further boost the performance of our model, we incorporate the recently proposed Dynamic Context Augmentation (DCA) (Yang et al. 2019). DCA is a global entity linking model that achieves better efficiency and effectiveness than the model of Ganea and Hofmann (2017) by breaking the "all-mention coherence" assumption.…”
Section: Incorporating Explicit Entity Types
confidence: 99%
“…Our best performance can then be attributed to the quality of the textual context learned by the transformers as well as the optimal choice of KG-triples context. Generalizing KG Context: We induced 1-hop KG context in the DCA-SL model [12] for candidate entities. The replacement of the unstructured Wikipedia description with structured KG-triple context containing entity aliases, entity types, consolidated entity description, etc.…”
Section: Discussion
confidence: 99%
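The "1-hop KG context" above can be read as verbalizing a candidate entity's immediate triples (aliases, types, description) into text that takes the place of the unstructured Wikipedia description. A minimal sketch under that reading; the triples, relation names, and the verbalize_triples helper are hypothetical, not the cited paper's pipeline.

```python
# Hypothetical sketch: flatten a candidate entity's 1-hop KG triples into a
# textual context string, in the spirit of the KG-context replacement
# described above. Relation names and example triples are made up.

def verbalize_triples(entity: str, triples: list[tuple[str, str, str]]) -> str:
    # Keep triples where the candidate is subject or object (1 hop), then
    # render each as a "subject relation object" phrase.
    phrases = [f"{s.replace('_', ' ')} {r.replace('_', ' ')} {o.replace('_', ' ')}"
               for s, r, o in triples if entity in (s, o)]
    return ". ".join(phrases) + "."

triples = [
    ("Dave_Gilbert", "instance_of", "human"),
    ("Dave_Gilbert", "also_known_as", "David Robert Gilbert"),
    ("Dave_Gilbert", "occupation", "cricket_coach"),
]
print(verbalize_triples("Dave_Gilbert", triples))
# -> Dave Gilbert instance of human. Dave Gilbert also known as
#    David Robert Gilbert. Dave Gilbert occupation cricket coach.
```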