2022
DOI: 10.1007/s11063-022-10933-3
A Multi-Task BERT-BiLSTM-AM-CRF Strategy for Chinese Named Entity Recognition

Cited by 11 publications (10 citation statements)
References 30 publications
“…Before experimenting, a dataset of online news domains is required, which is obtained by utilizing the ResumeNER corpus to identify the named entities of financial-domain information in online news, thereby enabling sentiment monitoring based on it. Subsequently, training-data proportions ranging from 20% to 50% are selected from the experimental corpus as training sets for comparison models such as BiLSTM-CRF (Luo et al., 2018), LM-LSTM-CRF (Shi, 2022), Lite-LSTM (Tang et al., 2022), and the trigger matching network (TMN) (Lin et al., 2020), a framework for entity triggers. The model proposed in this article is trained using 5% to 20% of the training data.…”
Section: Experiments and Results
confidence: 99%
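The low-resource comparison described in this statement (baselines trained on 20%–50% slices of the corpus, the proposed model on only 5%–20%) can be sketched as below. The corpus size and the first-n slicing scheme are illustrative assumptions, not details from the cited papers.

```python
# Hypothetical sketch of the training-data split in the comparison above.
# The corpus contents and the slicing strategy are illustrative only.
corpus = [f"sentence_{i}" for i in range(1000)]

def training_slice(corpus, fraction):
    """Take the first `fraction` of the corpus as a training set."""
    n = int(len(corpus) * fraction)
    return corpus[:n]

# Baselines (BiLSTM-CRF, LM-LSTM-CRF, Lite-LSTM, TMN) see 20%-50% of the data;
# the proposed model is trained on 5%-20%.
baseline_splits = {f: training_slice(corpus, f) for f in (0.2, 0.3, 0.4, 0.5)}
proposed_splits = {f: training_slice(corpus, f) for f in (0.05, 0.1, 0.2)}
print({f: len(s) for f, s in proposed_splits.items()})
```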
“…In another study (Kim et al., 2020), the classical LSTM+CRF structure was employed for NER. In a survey of Chinese NER (Tang et al., 2022), a lattice-LSTM structure was proposed to better mine the character features in Chinese. Various word-segmentation results are introduced into the model, and word information is remotely transmitted to nodes to form a “grid structure” to improve efficiency.…”
Section: Related Work
confidence: 99%
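The LSTM+CRF structure mentioned in this statement pairs a sequence encoder with a CRF layer whose transition scores rule out invalid tag sequences (e.g., an I- tag with no preceding B- tag). A minimal sketch of the CRF's Viterbi decoding step follows; the emission and transition scores are hypothetical stand-ins for trained BiLSTM/CRF parameters, not values from the cited work.

```python
# Minimal sketch of CRF Viterbi decoding as used on top of a BiLSTM in
# LSTM+CRF NER taggers. The encoder is omitted; per-token emission scores
# below are hypothetical stand-ins for its output.
TAGS = ["O", "B-PER", "I-PER"]

# Hypothetical transition scores transitions[prev][curr]: a trained CRF
# learns these; here they simply penalize I-PER without a preceding
# B-PER/I-PER, which is what makes the decoded tag sequence well-formed.
transitions = {
    "O":     {"O": 0.0, "B-PER": 0.0, "I-PER": -10.0},
    "B-PER": {"O": 0.0, "B-PER": 0.0, "I-PER": 1.0},
    "I-PER": {"O": 0.0, "B-PER": 0.0, "I-PER": 1.0},
}

def viterbi(emissions):
    """Return the highest-scoring tag path for a list of emission dicts."""
    scores = {t: emissions[0][t] for t in TAGS}  # best score ending in tag t
    back = []                                    # back-pointers per step
    for em in emissions[1:]:
        prev, scores, ptr = scores, {}, {}
        for cur in TAGS:
            best = max(TAGS, key=lambda p: prev[p] + transitions[p][cur])
            scores[cur] = prev[best] + transitions[best][cur] + em[cur]
            ptr[cur] = best
        back.append(ptr)
    last = max(TAGS, key=lambda t: scores[t])
    path = [last]
    for ptr in reversed(back):   # walk back-pointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Illustrative emissions for a 4-token sentence: a person name then "said".
emissions = [
    {"O": 0.1, "B-PER": 2.0, "I-PER": 0.5},
    {"O": 0.1, "B-PER": 0.2, "I-PER": 1.5},
    {"O": 0.1, "B-PER": 0.2, "I-PER": 1.5},
    {"O": 2.0, "B-PER": 0.1, "I-PER": 0.1},
]
print(viterbi(emissions))  # → ['B-PER', 'I-PER', 'I-PER', 'O']
```

The transition scores are what the CRF layer adds over a plain softmax: without them, each token would be tagged independently and ill-formed sequences like "O I-PER" could appear.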
“…Under the same experimental settings, we compared the performance of our EMLB model with the methods proposed by Nasar et al. (2021), Fabregat et al. (2023), Govindarajan et al. (2023), Ke et al. (2023), Laursen et al. (2023), and Tang et al. (2023). After conducting multiple independent experiments, we used the average F1 score as the evaluation metric.…”
Section: Methods
confidence: 99%
“…The evaluation corpus used in related research comes from English media organizations such as The Wall Street Times, The New York Times, and Wikipedia. The task of named entity recognition has stabilized and developed through a process of continuous improvement [4]. In the 1950s, researchers first studied structured entities in papers and medical records.…”
Section: Introduction
confidence: 99%