2022
DOI: 10.1007/s10278-022-00712-w
Natural Language Processing Model for Identifying Critical Findings—A Multi-Institutional Study

Cited by 10 publications
(6 citation statements)
References 8 publications
“…Similarly, the BioBERT model that was trained on the same input text as BERT supplemented with PubMed abstracts and full-text articles significantly outperforms BERT on biomedical named entity recognition, question answering and relation extraction [19]. Within the existing literature in the clinical domain, domain-specific models are shown to outperform fine-tuned general BERT models virtually every time a direct comparison takes place, such as for the aforementioned ClinicalBERT [11, 16, 18, 20–22] and BioBERT [11, 19, 21, 23, 24].…”

Section: Introduction
confidence: 99%
“…In Fig 1, we present the overall framework for generating two versions of summaries of findings documented within radiology reports, which contains four primary modules: (i) section segmentation; (ii) noisy data generation for layman summary; (iii) two-step large language model fine-tuning; and (iv) user evaluation to assess the quality of both technical and layman summaries. Section segmentation: We utilize previously developed NLP methods to parse the clinical history, imaging protocol, findings, and impression sections of the radiology reports using section segmentation based on the header [25]. To generalize the section segmentation across multiple institutions, we extracted all the variations of the headers using a similar-word list generated by a Word2Vec language model trained on 3M radiology reports from Emory University Hospital.…”

Section: Methods
confidence: 99%
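The header-based section segmentation quoted above can be sketched as follows. This is a minimal illustration, not the cited implementation: the `HEADER_VARIANTS` lists below are hypothetical stand-ins for the variants that the cited work mined with a Word2Vec model trained on ~3M Emory radiology reports, and `segment_report` is an assumed helper name.

```python
import re

# Hypothetical header variants per canonical section. In the cited study,
# these lists were mined automatically via Word2Vec similar-word queries
# over a large corpus of radiology reports.
HEADER_VARIANTS = {
    "clinical_history": ["CLINICAL HISTORY", "HISTORY", "INDICATION"],
    "protocol": ["TECHNIQUE", "PROTOCOL", "EXAM"],
    "findings": ["FINDINGS", "FINDING"],
    "impression": ["IMPRESSION", "CONCLUSION"],
}

def segment_report(text):
    """Split a report into sections keyed by canonical section name."""
    # One alternation matching any known header at the start of a line.
    alternation = "|".join(
        re.escape(v) for variants in HEADER_VARIANTS.values() for v in variants
    )
    pattern = re.compile(rf"^({alternation})\s*:", re.MULTILINE | re.IGNORECASE)

    sections = {}
    matches = list(pattern.finditer(text))
    for i, m in enumerate(matches):
        header = m.group(1).upper()
        canonical = next(
            name for name, variants in HEADER_VARIANTS.items()
            if header in variants
        )
        # Section body runs from the end of this header to the next header.
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections[canonical] = text[start:end].strip()
    return sections
```

For example, `segment_report("CLINICAL HISTORY: Cough.\nFINDINGS: No acute disease.\nIMPRESSION: Normal.")` maps each canonical section name to its body text. Anchoring the match at the line start and mapping every surface variant back to a canonical key is what lets one segmenter generalize across institutions with different header conventions.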