2023
DOI: 10.1016/j.ins.2022.12.049
Chinese named entity recognition method for the finance domain based on enhanced features and pretrained language models

Cited by 25 publications (8 citation statements)
References 24 publications
“…Our experiment mainly uses the Pytorch and Huggingface libraries. The pretrained model of T5 is the Mengzi‐t5‐base model [32]. The latent variable z dimension in CVAE is set to 768, and the hidden linear size of the CVAE network is also set to 768.…”
Section: Dataset and Simulation Parameters (mentioning)
confidence: 99%
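The excerpt above describes a concrete configuration: Mengzi-t5-base loaded through the Hugging Face libraries, with a CVAE whose latent variable z and hidden linear layer are both 768-dimensional. A minimal sketch of that setup follows; the model identifier Langboat/mengzi-t5-base and the CVAE layer layout beyond the two stated sizes are assumptions, not details taken from the citing paper.

```python
# Sketch only: reproduces the stated sizes (latent dim 768, hidden linear 768)
# around Mengzi-t5-base; everything else about the CVAE wiring is assumed.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_ID = "Langboat/mengzi-t5-base"  # assumed public checkpoint name for Mengzi-t5-base
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
t5 = T5ForConditionalGeneration.from_pretrained(MODEL_ID)

class CVAEHead(nn.Module):
    """Conditional VAE head with latent and hidden sizes of 768 (assumed layout)."""
    def __init__(self, cond_dim=768, latent_dim=768, hidden_dim=768):
        super().__init__()
        # Encoder: condition -> parameters of q(z | condition)
        self.enc = nn.Sequential(nn.Linear(cond_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: sampled z back to the conditioning space
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, cond_dim))

    def forward(self, cond):
        h = self.enc(cond)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

# Usage example: condition the CVAE on the mean-pooled T5 encoder output (d_model = 768).
inputs = tokenizer("示例文本", return_tensors="pt")
cond = t5.encoder(**inputs).last_hidden_state.mean(dim=1)  # shape (1, 768)
recon, mu, logvar = CVAEHead()(cond)
```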
“…For instance, Srivastava [16] performed named entity recognition based on word embedding and deep learning models for web information security texts. Similarly, Zhang [17] proposed a pre-trained Chinese financial domain named entity recognition model that contains two sub-models for financial entity boundary delineation and financial entity classification. Puccetti [18] provided a patented text named entity recognition system that combines rule-based, gazetteer, and deep learning techniques.…”
Section: Related Work (mentioning)
confidence: 99%
“…The majority of the biomedical text mining community has been using deep learning-based and specifically Transformer-based methods [Miranda-Escalada et al, 2023] to perform several tasks, including NER, RE and Question-Answering. Moreover, Large Language Models have recently emerged as an alternative also for tasks other than Question-Answering [Wang et al, 2023]. We wanted to explore how we could take advantage of deep learning-based methods to improve the JensenLab suite of web resources.…”
Section: Introduction (mentioning)
confidence: 99%