Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/550

Leveraging Document-Level Label Consistency for Named Entity Recognition

Abstract: Document-level label consistency is an effective indicator that different occurrences of a particular token sequence are very likely to have the same entity types. Previous work focused on better context representations and used a CRF for label decoding. However, CRF-based methods are inadequate for modeling document-level label consistency. This work introduces a novel two-stage label refinement approach to handle document-level label consistency, where a key-value memory network is first used to record draft labels predicted by a base model, and a multi-channel Transformer then refines those draft predictions based on the co-occurrence relationships derived from the memory network.
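
The first stage described in the abstract can be pictured as a document-level memory keyed by token sequences. Below is a minimal sketch assuming a plain dictionary-backed store; the class and method names are illustrative and are not the paper's actual key-value memory network.

```python
# Minimal sketch of a document-level draft-label memory (illustrative names,
# not the paper's implementation): keys are token sequences, values are the
# draft labels predicted for each of their occurrences in the document.
from collections import defaultdict

class DraftMemory:
    def __init__(self):
        self._store = defaultdict(list)

    def record(self, span_text, draft_label):
        # Remember that this occurrence of `span_text` was tagged `draft_label`.
        self._store[span_text.lower()].append(draft_label)

    def lookup(self, span_text):
        # All draft labels seen for this span anywhere in the document; the
        # second stage uses this as evidence for document-level consistency.
        return list(self._store.get(span_text.lower(), []))
```
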

Cited by 15 publications (14 citation statements)
References 14 publications

“…Document-level EMD systems like HIRE-NER [38] utilize this tendency to distill non-local information for each unique token from the entire scope of the document using a memory structure, and append it to sentence-level contextual embeddings before an EMD decoder draws the final output labels. DocL-NER [39] additionally includes a label refinement network to enforce label consistency across documents and improve EMD results. We compare EMD Globalizer with HIRE-NER [38] to test how effectively global information for EMD is compiled in each system.…”
Section: Related Work
Mentioning confidence: 99%
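
As a rough illustration of the aggregation idea described in this statement (an assumption about the general approach, not HIRE-NER's actual architecture), one can pool the sentence-level embeddings of all occurrences of a token across the document and concatenate that pooled vector back onto each occurrence:

```python
# Hedged sketch of document-level feature aggregation: average the sentence-level
# embeddings of every occurrence of a token across the whole document, then append
# that non-local summary to each occurrence before decoding.
from collections import defaultdict
import numpy as np

def append_document_features(tokens, embeddings):
    # tokens: list of token strings for the whole document
    # embeddings: array of shape (num_tokens, dim), sentence-level contextual vectors
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok.lower()].append(i)

    doc_view = np.empty_like(embeddings)
    for idx in positions.values():
        doc_view[idx] = embeddings[idx].mean(axis=0)  # one summary per unique token

    # The decoder then sees both the local and the document-level view of each token.
    return np.concatenate([embeddings, doc_view], axis=-1)
```
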
“…NER with document-level label propagation is usually approached using a two-stage framework [18]–[20]. In the first stage, draft labels for tokens or candidate NEs are predicted by a base model.…”
Section: B. Document-level Label Propagation for Named Entity Recognition
Mentioning confidence: 99%
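
A schematic of that first stage is sketched below, assuming a hypothetical `base_model.predict()` that returns (span, label, confidence) triples for one sentence, together with the `DraftMemory` sketched earlier; neither name corresponds to a specific library API.

```python
# First-stage pass (schematic): run any sentence-level base model over the document
# and record its draft predictions in the document-level memory. `base_model` and
# its .predict() signature are hypothetical placeholders.
def first_stage(document_sentences, base_model, memory):
    drafts = []
    for sentence in document_sentences:
        for span_text, label, confidence in base_model.predict(sentence):
            memory.record(span_text, label)          # document-level evidence
            drafts.append((span_text, label, confidence))
    return drafts
```
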
“…Gui et al. [20] introduce a label refinement approach to handle document-level label consistency. In this approach, a key-value memory network is first used to record draft labels predicted by the base model, and then a multi-channel Transformer refines these draft predictions based on the explicit co-occurrence relationship derived from the memory network.…”
Section: B. Document-level Label Propagation for Named Entity Recognition
Mentioning confidence: 99%
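
The second stage in the paper uses a multi-channel Transformer over the memory contents; the sketch below substitutes a much simpler consistency rule (a majority vote over the same span's draft labels elsewhere in the document) purely to make the refinement step concrete. `DraftMemory` and `first_stage` are the illustrative helpers sketched earlier, not the authors' code.

```python
# Simplified stand-in for the refinement stage: override low-confidence draft labels
# with the majority label observed for the same span elsewhere in the document.
from collections import Counter

def refine(drafts, memory, confidence_threshold=0.9):
    refined = []
    for span_text, label, confidence in drafts:
        votes = Counter(memory.lookup(span_text))
        if not votes:
            refined.append((span_text, label))
            continue
        majority_label, count = votes.most_common(1)[0]
        if confidence < confidence_threshold and count > 1 and majority_label != label:
            refined.append((span_text, majority_label))  # defer to document consistency
        else:
            refined.append((span_text, label))           # keep the confident draft
    return refined
```
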
“…Indeed, there exist several attempts to utilize global information beyond the single sentence for NER [1], [23], [11], [4]. However, these methods still often fail at this task, for the following reasons.…”
Section: Introduction
Mentioning confidence: 99%
“…First, they [1], [23], [4] do not provide a sufficiently effective method to address the potential noise introduced along with the global information. Second, they [1], [23], [11] only utilize extra information at the word level, ignoring modeling at the sentence level.…”
Section: Introduction
Mentioning confidence: 99%