2022
DOI: 10.1177/14604582221131198
The natural language processing of radiology requests and reports of chest imaging: Comparing five transformer models’ multilabel classification and a proof-of-concept study

Abstract: Background Radiology requests and reports contain valuable information about diagnostic findings and indications, and transformer-based language models are promising for more accurate text classification. Methods In a retrospective study, 2256 radiologist-annotated radiology requests (8 classes) and reports (10 classes) were divided into training and testing datasets (90% and 10%, respectively) and used to train 32 models. Performance metrics were compared by model type (LSTM, Bertje, RobBERT, BERT-clinical, B…
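The abstract's evaluation setup (a 90%/10% train/test split and multilabel classification metrics) can be sketched in plain Python. This is a hypothetical illustration only: the label names and toy data below are invented placeholders, and the actual study fine-tunes transformer models rather than this toy code.

```python
import random

# Hedged sketch of the study's setup: a 90%/10% split of annotated documents,
# plus micro-averaged F1 for multilabel predictions. Labels here are invented.

def train_test_split(items, test_frac=0.10, seed=42):
    """Shuffle items and split off a test fraction (10% by default)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 over multilabel predictions given as sets of labels."""
    tp = fp = fn = 0
    for t, p in zip(true_sets, pred_sets):
        tp += len(t & p)   # labels predicted and correct
        fp += len(p - t)   # labels predicted but wrong
        fn += len(t - p)   # labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy multilabel predictions over invented chest-imaging labels
true_labels = [{"infiltrate", "effusion"}, {"normal"}, {"nodule"}]
pred_labels = [{"infiltrate"}, {"normal"}, {"nodule", "effusion"}]
score = micro_f1(true_labels, pred_labels)  # 0.75 on this toy example
```

On the abstract's 2256 annotated documents, this split yields 2031 training and 225 test items.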

Cited by 5 publications (4 citation statements); References 43 publications.
“…Similarly, the BioBERT model that was trained on the same input text as BERT supplemented with PubMed abstracts and full-text articles significantly outperforms BERT on biomedical named entity recognition, question answering and relation extraction [19]. Within the existing literature in the clinical domain, domain-specific models are shown to outperform fine-tuned general BERT models virtually every time a direct comparison takes place, such as for the aforementioned ClinicalBERT [11, 16, 18, 20–22] and BioBERT [11, 19, 21, 23, 24].…”
Section: Introduction
confidence: 99%
“…A popular improvement suggestion, the so-called Robustly Optimized BERT pre-training Approach (RoBERTa) model, makes use of a more optimized set of hyperparameters and a more dynamic pre-training task [7]. The RoBERTa architecture has been shown to outperform the original BERT model in direct comparisons on a relatively large number of tasks [7–17]. Over the years, further adjustments have been made to the RoBERTa architecture.…”
Section: Introduction
confidence: 99%
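The "more dynamic pre-training task" attributed to RoBERTa above refers to dynamic masking: a fresh mask pattern is drawn for each training pass, rather than masking once during preprocessing as in the original BERT. A minimal sketch, assuming whole-word tokens and an illustrative 15% mask rate (real implementations operate on subword vocabularies):

```python
import random

# Hypothetical sketch of BERT-style static masking vs. RoBERTa-style dynamic
# masking. Tokens and mask rate are illustrative, not an actual pipeline.

MASK = "[MASK]"

def mask_tokens(tokens, rate=0.15, rng=None):
    """Replace ~rate of tokens with [MASK]; return the masked sequence."""
    rng = rng or random.Random()
    n = max(1, int(len(tokens) * rate))
    positions = set(rng.sample(range(len(tokens)), n))
    return [MASK if i in positions else tok for i, tok in enumerate(tokens)]

tokens = ["the", "chest", "x", "ray", "shows", "a", "small",
          "left", "pleural", "effusion"]

# Static masking (original BERT): mask once in preprocessing, reuse every epoch.
static_masked = mask_tokens(tokens, rng=random.Random(0))
static_epochs = [static_masked for _ in range(3)]

# Dynamic masking (RoBERTa): draw a fresh mask pattern for each epoch.
dynamic_epochs = [mask_tokens(tokens, rng=random.Random(epoch))
                  for epoch in range(3)]
```

With static masking the model sees the identical masked sequence every epoch; with dynamic masking the masked positions can vary between epochs, exposing the model to more prediction targets over the same corpus.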
“…Similar tendencies can be observed in the medical domain. The language-specific Dutch model, RobBERT, outperformed multilingual BERT (mBERT) on the multilabel classification of chest imaging requests and report items [51]. The Swedish KB-BERT model outperformed mBERT when fine-tuned for the de-identification task, albeit marginally [52].…”
Section: Pretrained Language Models For Loe
confidence: 99%