Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
DOI: 10.18653/v1/P19-1585

Merge and Label: A Novel Neural Network Architecture for Nested NER

Abstract: Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, whi…
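
The architecture described in the abstract reduces to two steps per level: score adjacent spans with a real-valued merge decision, combine their embeddings differentiably, then label each resulting span independently. The toy NumPy sketch below illustrates that flow under loose assumptions; the gating function, weight shapes, the hard 0.5 threshold, and the MAX_LEVELS cap (standing in for the maximal-nesting-level hyperparameter noted by citing work below) are all hypothetical simplifications, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the "merge and label" idea: adjacent spans
# receive a real-valued merge score, embeddings are mixed differentiably, and
# every resulting span is labelled on its own. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_LABELS, MAX_LEVELS = 8, 3, 2   # MAX_LEVELS ~ maximal nesting level (assumed)

W_merge = rng.normal(size=(2 * DIM,))      # scores a pair of adjacent spans
W_label = rng.normal(size=(DIM, N_LABELS)) # per-span label classifier

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def merge_level(spans):
    """One merge pass: soft-merge adjacent pairs with a real-valued gate."""
    merged, i = [], 0
    while i < len(spans) - 1:
        left, right = spans[i], spans[i + 1]
        gate = sigmoid(W_merge @ np.concatenate([left, right]))  # real-valued, not 0/1
        if gate > 0.5:  # hard threshold only for this demo; training keeps the soft value
            merged.append(gate * 0.5 * (left + right) + (1 - gate) * left)
            i += 2
        else:
            merged.append(left)
            i += 1
    if i == len(spans) - 1:      # carry over an unpaired final span
        merged.append(spans[-1])
    return merged

def label(span):
    """Label each span independently, as the abstract describes."""
    return int(np.argmax(span @ W_label))

tokens = [rng.normal(size=DIM) for _ in range(5)]  # stand-in word embeddings
spans = tokens
for _ in range(MAX_LEVELS):                        # one pass per nesting level
    spans = merge_level(spans)
print([label(s) for s in spans])
```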

Cited by 89 publications (57 citation statements) · References 17 publications

“…However, they also use the maximal-length hyperparameter to reduce time complexity. Fisher and Vlachos (2019) proposed a novel neural network architecture that merges tokens or entities into entities forming nested structures and then labels each of them. Their architecture, however, needs the maximal nesting level hyperparameter.…”
Section: Flat NER
Mentioning confidence: 99%
“…Lin et al [34] proposed a sequence-to-nugget architecture that uses a head-driven phrase structure for nested NE recognition. In Table 7, BERT is used in Xia et al [55], Fisher et al [56], Shibuya et al [57], Straková et al [59] and Jue et al [84]. Compared with them, our model achieves state-of-the-art performance in the task of nested NE recognition.…”
Section: Comparing With Other Methods
Mentioning confidence: 99%
“…To investigate the effectiveness and efficiency of our proposed method, we conduct comprehensive experiments on three benchmark NER datasets.

Model                               Dataset 1           Dataset 2
                                    P     R     F1      P     R     F1
(Lin et al., 2019) [POS]            76.2  73.6  74.9    75.8  73.9  74.8
M&L (Fisher and Vlachos, 2019)      75.1  74.1  74.6    -     -     -
Bound. Aware (Zheng et al., 2019)   -     -     -       75.9  73.6  74.7
BENSC (Tan et al., 2020) [POS]      77.1  74.2  75.6    78.9  72.7  75.7
Our Model (LSTM) [POS]              78.5  74.6  76.5    77.4  73.9  75.6

Datasets without nested entities are called flat NER datasets.…”
Section: Methods
Mentioning confidence: 99%