2022
DOI: 10.3233/shti220112
Natural Language Processing to Identify Abnormal Breast, Lung, and Cervical Cancer Screening Test Results from Unstructured Reports to Support Timely Follow-up

Abstract: Cancer screening and timely follow-up of abnormal results can reduce mortality. One barrier to follow-up is the failure to identify abnormal results. While EHRs have coded results for certain tests, cancer screening results are often stored in free-text reports, which limit capabilities for automated decision support. As part of the multilevel Follow-up of Cancer Screening (mFOCUS) trial, we developed and implemented a natural language processing (NLP) tool to assist with real-time detection of abnormal cancer…
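The abstract describes an NLP tool that detects abnormal screening results in free-text reports. A minimal sketch of one common approach to this task — rule-based extraction of the BI-RADS final assessment category from a mammography report — is shown below. The specific rules of the mFOCUS tool are not described in this excerpt; the pattern, category set, and function name here are illustrative assumptions.

```python
import re

# Assumed abnormal set: BI-RADS categories that typically warrant follow-up
# (0 = incomplete, 3 = probably benign, 4 = suspicious, 5 = highly suggestive).
ABNORMAL_BIRADS = {"0", "3", "4", "5"}

# Tolerates common spelling variants: "BI-RADS 4", "BIRADS: 4",
# "BI-RADS category 4".
BIRADS_PATTERN = re.compile(
    r"BI-?RADS(?:\s+(?:category|assessment))?[:\s]*([0-6])",
    re.IGNORECASE,
)

def flag_abnormal(report_text: str) -> bool:
    """Return True if the free-text report contains a BI-RADS category
    associated with an abnormal result needing follow-up."""
    match = BIRADS_PATTERN.search(report_text)
    return bool(match) and match.group(1) in ABNORMAL_BIRADS
```

A report reading "IMPRESSION: BI-RADS 4 - suspicious abnormality" would be flagged, while "BI-RADS category 1: negative" would not. Real systems layer negation handling and section parsing on top of rules like this.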

Cited by 3 publications (4 citation statements); References 0 publications
“…Of these, 28 studies include radiology reports from mammography (40–67). Primary objectives of these 28 studies included extracting relevant information based on pre-defined terms (42, 50, 56, 59, 62, 63, 65–67); identifying and characterizing abnormal findings (e.g., location, laterality, related sentences) (44, 48, 49, 58, 60); inference of BI-RADS final assessment categories by analyzing the findings section of radiology reports (46, 55); identifying abnormal screening results requiring follow-up or as determined by subsequent pathology reports (40, 41, 43); determination of breast tissue composition class (51); and risk assessment or risk stratification of findings within BI-RADS categories for malignancy (45, 53). Two studies relate to the development of NLP techniques to assist radiologists by providing word suggestions (47) and proposing new RadLex dictionary terms (64).…”
Section: Results
confidence: 99%
“…Fifteen of the studies included data from other cancers or diseases in addition to breast cancer. Of these studies, four developed or evaluated NLP systems using the same methodology as for other cancers (14, 43, 57, 73). Five studies developed or evaluated NLP systems for non-cancer diseases or disease sites, including diabetes (66), disease observable on bone radiographs (40), disease observable from head and neck, abdominal, or pelvic ultrasounds (70), neuroimaging (69), or various diseases for which confirmation was required by pathology or further radiology studies (81).…”
Section: Results
confidence: 99%
“…NLP models have been implemented on free-text pathology reports to assist with timely follow-up of abnormal results [22]. With the exponentially growing medical literature on breast cancer, the use of NLP becomes ever more essential [49].…”
Section: Clinical Applications and NLP Methods in Breast Imaging
confidence: 99%
“…Relevant guideline recommendations and specialist input were used to create automated electronic health record (EHR) algorithms to identify patient eligibility and determine a recommended follow-up period and appropriate diagnostic follow-up (eTable 1 in Supplement 2). 25,26 The exception was short-interval colonoscopy, for which the follow-up time frame was determined by the gastrointestinal specialist performing the procedure. Designed to supplement usual care, additional time beyond the due date for the abnormal test result was added to allow for completion of the recommended follow-up before a patient became eligible (eFigure 1 in Supplement 2).…”
Section: Study Design and Participants
confidence: 99%
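The citation statement above describes EHR algorithms that map an abnormal test result to a recommended follow-up window, then add supplemental time before the patient becomes trial-eligible. A minimal sketch of that kind of lookup-plus-grace-period logic follows; the test names, follow-up windows, and supplemental period are invented for illustration and are not taken from the mFOCUS protocol.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative mapping from (test, abnormal finding) to a recommended
# follow-up window in days. All values here are assumptions.
FOLLOW_UP_DAYS = {
    ("mammogram", "BI-RADS 0"): 90,
    ("pap_smear", "HSIL"): 60,
    ("chest_ct", "Lung-RADS 4"): 30,
}

# Extra time beyond the follow-up due date before a patient becomes
# eligible for trial outreach (assumed value).
SUPPLEMENTAL_DAYS = 14

def eligibility_date(test: str, result: str, result_date: date) -> Optional[date]:
    """Date on which a patient with this abnormal result becomes eligible
    for outreach, or None if the result is not a tracked abnormal finding."""
    window = FOLLOW_UP_DAYS.get((test, result))
    if window is None:
        return None
    return result_date + timedelta(days=window + SUPPLEMENTAL_DAYS)
```

For example, an assumed 90-day mammogram window plus the 14-day supplement yields eligibility 104 days after the result date; untracked results return None so usual care proceeds unaffected.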