2020
DOI: 10.1109/access.2020.3011598

A Bidirectional Iterative Algorithm for Nested Named Entity Recognition

Abstract: Nested named entity recognition (NER) is a special case of structured prediction in which annotated sequences can be contained inside each other. It is a challenging and significant problem in natural language processing. In this paper, we propose a novel framework for nested named entity recognition tasks. Our approach is based on a deep learning model which can be called in an iterative way, expanding the set of predicted entity mentions with each subsequent iteration. The proposed framework combines two suc…
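The abstract is truncated, but the iterative expansion it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration: `predict_entities` is a hypothetical stand-in for the paper's neural model, not its actual interface, and the fixed-point stopping rule is inferred from the abstract's wording.

```python
# Minimal sketch of the iterative inference loop described in the abstract.
# `predict_entities` is a hypothetical callable standing in for the paper's
# model: given the tokens and the mentions found so far, it returns any
# additional mentions it can detect in that context.

def iterative_nested_ner(tokens, predict_entities, max_iters=10):
    """Repeatedly call the model, growing the set of predicted mentions.

    Each mention is a (start, end, label) triple over `tokens`. The loop
    stops when an iteration adds nothing new (a fixed point) or when the
    iteration budget is exhausted.
    """
    mentions = set()
    for _ in range(max_iters):
        new = set(predict_entities(tokens, mentions)) - mentions
        if not new:          # fixed point: no further nested mentions found
            break
        mentions |= new      # expand the prediction set and iterate again
    return sorted(mentions)
```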

Cited by 11 publications (8 citation statements)
References: 33 publications
“…In terms of developing NER models trained on this corpus, future work includes improving the recall for disease and medication, due to the importance of identifying these entities, and dealing with nested NER, for which there is a variety of approaches as summarized in (Dadas and Protasiewicz, 2020). Besides, our annotated corpus has hierarchical entities (for example, test result and sign/symptom are part of the entity finding).…”
Section: Discussion (mentioning)
confidence: 99%
“…Deep learning techniques, particularly recurrent neural networks (RNNs) and recently, transformer-based architectures like BERT (Devlin et al., 2019) and GPT (Radford et al., 2019), revolutionized NER. These models leverage contextual embeddings to capture intricate relationships and dependencies, achieving state-of-the-art results in various languages and domains for both flat (Xia et al., 2019; Zheng et al., 2019; Arkhipov et al., 2019; Lothritz et al., 2020; Yu et al., 2020; Yang et al., 2021) and nested (Sohrab and Miwa, 2018; Katiyar and Cardie, 2018; Dadas and Protasiewicz, 2020; Wang et al., 2020) entities.…”
Section: Related Work (mentioning)
confidence: 99%
“…We conduct experiments on the dev sets of the three benchmark datasets to investigate the effects of the negative sampling strategy. We set the negative span count δ to different values, i.e., 0, 2, 4, 6, 8, 10, 20, 30, 40, 50, 100, 150, 200, 250 and 300. We report the experimental results in Figure 3, from which we observe that a sufficient number of negative spans is essential to ensure that the model performs well.…”
Section: Performance Against Negative Sampling Strategy (mentioning)
confidence: 99%
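As an illustration of the strategy this excerpt evaluates, here is a minimal sketch of negative span sampling for a span-based NER model. The function name, the uniform sampling scheme, and the length cap are assumptions for illustration, not the cited paper's implementation:

```python
import random

# Illustrative sketch of negative span sampling: draw up to `delta` spans
# that are NOT gold entities to serve as negative training examples for a
# span classifier. This is an assumed scheme, not the cited paper's code.

def sample_negative_spans(num_tokens, gold_spans, delta, max_len=10, seed=None):
    """Return up to `delta` (start, end) spans absent from `gold_spans`.

    Spans are inclusive of `start`, exclusive of `end`, and capped at
    `max_len` tokens, mirroring the length threshold used by span models.
    """
    rng = random.Random(seed)
    gold = set(gold_spans)
    candidates = [
        (s, e)
        for s in range(num_tokens)
        for e in range(s + 1, min(s + max_len, num_tokens) + 1)
        if (s, e) not in gold
    ]
    rng.shuffle(candidates)
    return candidates[:delta]

# Example: a 12-token sentence with two gold entities, delta = 8.
negatives = sample_negative_spans(12, {(0, 2), (5, 9)}, delta=8, seed=0)
```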
“…The span-based model is appropriate for resolving the above problem since it completely abstains from the sequence labeling mechanism, which eliminates the requirement for label dependency. Previous span-based NER models [1, 7-9] have been proposed to recognize nested entities by regarding text spans as candidate entities, where spans can be nested. Text spans are continuous text segments whose length is restricted by a length threshold [10, 11].…”
Section: Introduction (mentioning)
confidence: 99%
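The span-based formulation described in this excerpt is straightforward to sketch: enumerate every contiguous span up to a length threshold and classify each one independently, so nested (overlapping) entities need no special handling. `classify_span` below is a hypothetical scorer standing in for a trained model, not an API from any of the cited works:

```python
from typing import Callable, List, Optional, Tuple

Span = Tuple[int, int, str]

def span_based_ner(tokens: List[str],
                   classify_span: Callable[[List[str], int, int], Optional[str]],
                   max_len: int = 8) -> List[Span]:
    """Enumerate all spans up to `max_len` tokens and classify each one."""
    entities = []
    n = len(tokens)
    for start in range(n):
        for end in range(start + 1, min(start + max_len, n) + 1):
            label = classify_span(tokens, start, end)
            if label is not None:
                # Overlapping and nested spans are all kept: no
                # sequence-labeling constraint forces one label per token.
                entities.append((start, end, label))
    return entities
```

Because each span is scored independently, two spans such as (0, 4) and (1, 3) can both receive labels, which is exactly what flat sequence labeling cannot express.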