2017
DOI: 10.1093/bioinformatics/btx172
A transition-based joint model for disease named entity recognition and normalization

Corresponding author: dhji@whu.edu.cn


Cited by 59 publications (49 citation statements)
References 9 publications
“…Interesting though are differences for the same approach on different evaluation data sets. For the second-best system by Sachan et al [47], F 1 scores differ for BC5CDR and NCBI by 2.0 (for the third-best [46] by 2.7) percentage points, whereas for the best non-DL approach by Lou et al [48], this difference amounts to remarkable 4.1 percentage points. This hints at a strong dependence of the results of the same system set-up on the specific corpus these results have been worked out and, thus, limits generalizability.…”
Section: Medical Information Extraction in the Age of Deep Learning (mentioning, confidence: 99%)
[The quoted passage here is a flattened comparison table from the citing survey, listing the embedding setup of each cited system (BioBERT v1.1, SciBERT, and pre-trained or self-trained PubMed/PMC word embeddings); the original table structure is not recoverable from the extraction.]
Section: Citations (mentioning, confidence: 99%)
“…The first model uses a separate transition system for each level in the hierarchy and enforces nesting constraints to produce non-overlapping structures. The transition system used in this model allows us to construct representations of multi-word segments which have been shown to be essential for multi-word annotations such as NER and chunking ( [7], [9], [10] and [11]). We will refer to this model as the Multitask Shift-Reduce (MT-SR for short) system.…”
Section: Transition-based Slot Filling (mentioning, confidence: 99%)
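The citation statement above describes a transition system that builds representations of multi-word segments for NER and chunking. As a rough illustration only (not code from any of the cited systems), a minimal shift-reduce pass over tokens can be sketched as follows; the action names `SHIFT` and `APPEND` and the function itself are hypothetical, and a real system would choose each action with a learned classifier rather than take a fixed action sequence:

```python
def shift_reduce_segments(tokens, actions):
    """Apply a SHIFT/APPEND action sequence to tokens, returning segments.

    SHIFT opens a new segment with the next token; APPEND extends the
    segment currently being built, producing multi-word segments.
    """
    segments = []
    for token, action in zip(tokens, actions):
        if action == "SHIFT" or not segments:
            segments.append([token])       # open a new segment
        else:  # "APPEND"
            segments[-1].append(token)     # extend the current segment
    return [" ".join(seg) for seg in segments]

tokens = ["type", "II", "diabetes", "is", "chronic"]
actions = ["SHIFT", "APPEND", "APPEND", "SHIFT", "SHIFT"]
print(shift_reduce_segments(tokens, actions))
# → ['type II diabetes', 'is', 'chronic']
```

The point of the segment-level (rather than token-level) construction is that a multi-word mention such as "type II diabetes" is available as a single unit when later decisions are made.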
“…normalization. Lou et al (2017) proposed a transition-based model to jointly perform medical named entity recognition and normalization, casting the output construction process into an incremental state transition process. However, these existing joint modeling methods (1) rely heavily on handcrafted features and task-specific resources and thus fail to encode complicated and general features such as character-level and semantic-level features; (2) use simplistic ways to jointly model medical named entity recognition and normalization, which cannot provide essential mutual supports between these two.…”
Section: Introduction (mentioning, confidence: 99%)
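The statement above summarizes the paper's core idea: recognition and normalization decisions are made jointly inside one incremental state-transition process, rather than in a pipeline. A hedged toy sketch of that idea (not Lou et al.'s transition system or feature model; the action set, `TOY_LEXICON`, and concept IDs here are illustrative assumptions):

```python
# Toy lexicon mapping surface mentions to concept IDs; a real system would
# consult a terminology such as MEDIC/MeSH and score candidates jointly.
TOY_LEXICON = {"type ii diabetes": "MESH:D003924"}

def joint_ner_norm(tokens, actions):
    """Return (mention, concept_id) pairs from an action sequence.

    OUT discards a non-mention token; SHIFT opens a mention; APPEND extends
    it; REDUCE closes the mention and assigns a concept ID, so the boundary
    and normalization decisions happen in the same incremental pass.
    """
    buffer, current, mentions = list(tokens), [], []
    for action in actions:
        if action == "OUT":
            buffer.pop(0)                      # token outside any mention
        elif action == "SHIFT":
            current = [buffer.pop(0)]          # start a new mention
        elif action == "APPEND":
            current.append(buffer.pop(0))      # grow the current mention
        elif action == "REDUCE":
            mention = " ".join(current)
            cid = TOY_LEXICON.get(mention.lower(), "CUI-less")
            mentions.append((mention, cid))
            current = []
    return mentions

tokens = ["Type", "II", "diabetes", "worsens"]
actions = ["SHIFT", "APPEND", "APPEND", "REDUCE", "OUT"]
print(joint_ner_norm(tokens, actions))
# → [('Type II diabetes', 'MESH:D003924')]
```

Because REDUCE both closes the mention boundary and picks the concept, a scoring model over such actions can let normalization evidence influence boundary decisions and vice versa, which is the mutual support the pipeline approaches lack.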