2020
DOI: 10.1093/jamia/ocaa189

Clinical concept extraction using transformers

Abstract: Objective The goal of this study is to explore transformer-based models (eg, Bidirectional Encoder Representations from Transformers [BERT]) for clinical concept extraction and develop an open-source package with pretrained clinical models to facilitate concept extraction and other downstream natural language processing (NLP) tasks in the medical domain. Methods We systematically explored 4 widely used transformer-based archi…
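
The abstract describes applying pretrained transformer encoders to clinical concept extraction, which is typically cast as token-level named entity recognition (NER) over clinical text. Below is a minimal inference sketch, assuming the Hugging Face transformers library; the checkpoint name my-org/clinical-bert-ner is a hypothetical placeholder, not the package released with this paper.

```python
# Minimal sketch of transformer-based clinical concept extraction (NER).
# The model name is a placeholder: substitute any BERT-style checkpoint
# fine-tuned for token classification on clinical notes.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="my-org/clinical-bert-ner",  # hypothetical checkpoint name
    aggregation_strategy="simple",     # merge word pieces into entity spans
)

note = "Patient denies chest pain but reports shortness of breath on exertion."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```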

Cited by 110 publications (66 citation statements)
References 48 publications

“…For NLU tasks, networks in the literature on biomedical texts span from convolutional neural networks [41] to transformers (e.g. BERT and its variants) [51].…”
Section: Framework and Discussion (mentioning)
confidence: 99%
“…Thus, the results obtained in this work for automatic clinical coding in Spanish support the hypothesis that, when adapted to the specificities of the clinical domain, transformer-based models outperform their original general-domain versions on downstream medical tasks. Although this hypothesis was already explored in previous works [41]-[43], its validity had only been demonstrated for clinical NLP tasks in the English language, for which a considerable amount of standardized and curated medical linguistic resources is publicly available. In contrast, in this study, we have experimentally shown the effectiveness of the clinical domain adaptation of transformers when applied to small-data NLP tasks in a non-English language with limited textual resources.…”
Section: A. Domain-Specific Models (mentioning)
confidence: 99%
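
The domain adaptation this quote refers to is commonly done by continuing masked-language-model (MLM) pretraining of a general checkpoint on in-domain clinical text before fine-tuning; the quote itself does not name a method, so treat this as one standard reading. A minimal sketch of a single such update step, assuming PyTorch and Hugging Face transformers; the multilingual checkpoint and the single Spanish note are illustrative assumptions.

```python
# Minimal sketch of clinical domain adaptation via continued MLM
# pretraining on in-domain notes. Checkpoint and sample text are
# illustrative, not from the cited work.
import torch
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

notes = ["Paciente con dolor torácico de inicio súbito."]  # in-domain sample
enc = tokenizer(notes, truncation=True)
batch = collator([{"input_ids": ids} for ids in enc["input_ids"]])

model.train()
loss = model(**batch).loss  # MLM loss on the randomly masked tokens
loss.backward()
optimizer.step()
```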
“…We mainly focused on clinical coding, which is a document-level NLP task. However, the domain-specific transformers examined in this work could also be applied to word-level NLP problems, such as NER tasks, for which transformer-based models have been shown to achieve SOTA performance in the clinical NLP domain [41]-[43], mainly for the English language. Additionally, future studies should pay special attention to model interpretability: although a few efforts have already been made in this particular area, most DL models are still regarded as “black boxes”.…”
Section: B. Fine-Tuning Approach (mentioning)
confidence: 99%
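
This quote points at fine-tuning domain-specific transformers for word-level NER. A minimal sketch of the standard token-classification setup follows, assuming PyTorch and Hugging Face transformers; the base checkpoint, tag set, and toy example are illustrative, not from the cited work.

```python
# Minimal sketch of fine-tuning a transformer encoder for word-level NER
# (token classification). One manual training step, no Trainer.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-Problem", "I-Problem"]  # hypothetical tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy example: word-level tags aligned to each word's first subword;
# other subwords and special tokens get -100, which the loss ignores.
words = ["Patient", "reports", "chest", "pain", "."]
word_tags = [0, 0, 1, 2, 0]  # O O B-Problem I-Problem O
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for wid in enc.word_ids():
    aligned.append(word_tags[wid] if wid is not None and wid != prev else -100)
    prev = wid

model.train()
loss = model(**enc, labels=torch.tensor([aligned])).loss
loss.backward()   # standard supervised fine-tuning step
optimizer.step()
```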
“…Among these new context-aware language models, the Transformer [24] has undoubtedly stood out as the new deep learning SOTA architecture in the field of NLP. BERT [6], RoBERTa [13] and XLM-R [5] are examples of transformer-based models that have become the new SOTA for question answering, text summarization or NER tasks, also in the field of clinical NLP [21, 1, 28]. One of the main characteristics of the Transformer architecture is the self-attention mechanism it uses, which allows the model to parallelize a large part of the network architecture, increasing computing efficiency.…”
Section: Transformer-Based Models (mentioning)
confidence: 99%
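
For context on the self-attention remark in this quote, here is a minimal sketch of scaled dot-product self-attention (the single-head case), assuming PyTorch; the dimensions and random weights are illustrative only. All positions are computed in one batched matrix product, which is what makes the architecture parallelizable, unlike a step-by-step recurrent network.

```python
# Minimal sketch of single-head scaled dot-product self-attention.
import math
import torch

def self_attention(x: torch.Tensor,
                   w_q: torch.Tensor, w_k: torch.Tensor,
                   w_v: torch.Tensor) -> torch.Tensor:
    """x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # project all tokens at once
    scores = q @ k.T / math.sqrt(k.shape[-1])  # (seq_len, seq_len) similarities
    weights = torch.softmax(scores, dim=-1)    # attention distribution per token
    return weights @ v                         # weighted mix of value vectors

d_model, d_k, seq_len = 16, 8, 5
x = torch.randn(seq_len, d_model)              # toy token embeddings
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])
```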