Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.17
Modularized Interaction Network for Named Entity Recognition

Abstract: Although existing Named Entity Recognition (NER) models have achieved promising performance, they suffer from certain drawbacks. Sequence labeling-based NER models do not perform well in recognizing long entities because they focus only on word-level information, while segment-based NER models, which process segments instead of single words, are unable to capture the word-level dependencies within a segment. Moreover, as boundary detection and type prediction may cooperate with each other for …
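The two NER formulations contrasted in the abstract can be made concrete with a small sketch. The sentence, tags, and the `bio_to_segments` helper below are illustrative examples of my own, not taken from the paper: sequence labeling assigns one BIO tag per word, so a long entity is a run of tags that word-level models can break mid-entity, while a segment-based model predicts the entity as a single span and loses the word-level structure inside it.

```python
# Two common NER output representations for the same sentence (toy example).
tokens = ["New", "York", "Stock", "Exchange", "opened", "."]

# Sequence labeling: one BIO tag per word. A long entity is a consistent
# run of I- tags, which purely word-level models can break.
bio_tags = ["B-ORG", "I-ORG", "I-ORG", "I-ORG", "O", "O"]

# Segment-based: the entity is one (start, end, type) span, but the
# word-level dependencies inside the span are not modeled.
segments = [(0, 3, "ORG")]

def bio_to_segments(tags):
    """Convert BIO tags to (start, end, type) spans, end index inclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):      # trailing "O" flushes an open span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((start, i - 1, etype))
                start = None
            if tag.startswith("B-"):
                start, etype = i, tag[2:]
    return spans

print(bio_to_segments(bio_tags))  # → [(0, 3, 'ORG')]
```

Converting between the two views like this shows why the paper argues for combining them: each representation carries information the other drops.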

Cited by 23 publications (7 citation statements); references 26 publications.
“…Wang et al [14] designed an objective function for training neural models to handle nested entity label sequences as suboptimal paths for nested NER tasks. Li et al [15] developed a network for long names utilizing both segment-level information and word-level dependencies. Wang et al [16] addressed the issue of discontinuous text in NER tasks by adopting a fragment graph approach.…”
Section: Medical Named Entity Recognition
confidence: 99%
“…Decoder. CRF has been widely used in the NER task (Li et al., 2021). For an input sentence, the probability scores z^{AU}_t and z^{AT}_t for all tokens x_i ∈ X over the argument unit tags and aspect term tags are calculated by the CRF decoder:…”
Section: Basic Module
confidence: 99%
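The CRF decoder mentioned in the excerpt above scores whole tag sequences rather than individual tokens, and the best sequence is recovered with Viterbi decoding. The sketch below is a minimal, self-contained illustration of that decoding step, not the cited papers' implementation; the score values and tag set are invented for the example.

```python
def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence under a linear-chain CRF.

    emissions:   T x K list of lists, per-token score for each of K tags.
    transitions: K x K list of lists, transitions[i][j] = score of tag i -> tag j.
    Returns the best tag-index sequence of length T.
    """
    T, K = len(emissions), len(emissions[0])
    score = list(emissions[0])                  # best path score ending in each tag
    backpointers = []
    for t in range(1, T):
        new_score, bp = [], []
        for j in range(K):
            # Best previous tag i for current tag j.
            cands = [score[i] + transitions[i][j] for i in range(K)]
            i_best = max(range(K), key=lambda i: cands[i])
            bp.append(i_best)
            new_score.append(cands[i_best] + emissions[t][j])
        score = new_score
        backpointers.append(bp)
    # Trace back from the best final tag.
    best = [max(range(K), key=lambda j: score[j])]
    for bp in reversed(backpointers):
        best.append(bp[best[-1]])
    return best[::-1]

# Toy example: 3 tokens, 2 tags (say, O vs ASPECT); numbers are illustrative.
emissions = [[2.0, 0.5], [0.2, 1.5], [1.8, 0.3]]
transitions = [[0.5, -0.5], [-0.5, 0.5]]
print(viterbi_decode(emissions, transitions))  # → [0, 0, 0]
```

The transition matrix is what a token-independent classifier lacks: it lets the decoder penalize implausible tag pairs (e.g. an I- tag following O), which is why CRF layers remain a common choice on top of neural NER encoders.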
“…Besides BERT-CRF, which is a common model for both KBQA and general NER tasks, researchers have proposed many models for various NER tasks in recent years. For example, some researchers [27] propose a unified generative framework for various NER subtasks to recognize flat, nested, and discontinuous entities, some researchers [28] focus on utilizing both segment-level information and word-level dependencies in NER tasks, and some researchers [29] employ a maximal clique discovery method in a discontinuous NER task. However, a model which is effective in a general NER task may show worse performance in a subject recognition task because of the difference between them, so it is difficult to employ them directly in subject recognition.…”
Section: Related Work
confidence: 99%