2022
DOI: 10.1038/s41598-022-24721-5

Prediction of oxygen requirement in patients with COVID-19 using a pre-trained chest radiograph xAI model: efficient development of auditable risk prediction models via a fine-tuning approach

Abstract: Risk prediction requires comprehensive integration of clinical information and concurrent radiological findings. We present an upgraded chest radiograph (CXR) explainable artificial intelligence (xAI) model, which was trained on 241,723 well-annotated CXRs obtained prior to the onset of the COVID-19 pandemic. Mean area under the receiver operating characteristic curve (AUROC) for detection of 20 radiographic features was 0.955 (95% CI 0.938–0.955) on PA view and 0.909 (95% CI 0.890–0.925) on AP view. Coexisten…
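The abstract reports per-feature AUROC with 95% confidence intervals. As a minimal sketch of how such figures are typically computed (an assumed workflow, not the paper's actual method; the data below are synthetic), AUROC can be obtained from the Mann-Whitney U statistic and its CI from a percentile bootstrap:

```python
# Minimal sketch (not from the paper): AUROC via the Mann-Whitney U
# statistic, plus a percentile-bootstrap 95% CI of the kind reported
# in the abstract. Pure standard library; the inputs are synthetic.
import random

def auroc(y_true, y_score):
    """Probability that a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(y_true, y_score, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% CI for AUROC (resample cases with replacement)."""
    rng = random.Random(seed)
    n, stats = len(y_true), []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        yt = [y_true[i] for i in idx]
        if 0 < sum(yt) < n:  # a resample must contain both classes to be scored
            stats.append(auroc(yt, [y_score[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Toy check: two positives, two negatives, one mis-ranked pair -> AUROC 0.75.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice a per-view breakdown (PA vs. AP, as in the abstract) simply repeats this computation on each subset of studies.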

Cited by 15 publications (12 citation statements)
References 26 publications
“…Given the important ongoing discourse [3–8] surrounding bias in the clinical setting and bias in artificial intelligence, we believe our analysis of ChatGPT's performance based on the age and gender of patients represents an important touchpoint in both discussions [21–25]. While we did not find that age or gender is a significant predictor of accuracy, we note that our vignettes represent classic presentations of disease, and that atypical presentations may generate different biases.…”
Section: Discussion
“…Despite its relative infancy, artificial intelligence (AI) is transforming healthcare, with current uses including workflow triage, predictive models of utilization, labeling and interpretation of radiographic images, patient support via interactive chatbots, communication aids for non-English speaking patients, and more [1–8]. Yet, all of these use cases are limited to a specific part of the clinical workflow and do not provide longitudinal patient or clinician support. An under-explored use of AI in medicine is predicting and synthesizing patient diagnoses, treatment plans, and outcomes.…”
Section: Introduction
“…In the context of medical platforms, explainable AI (XAI) is crucial [61], particularly when it comes to the prediction of myocardial infarction (MI) probability using survey data. Transparency and interpretability [62] are crucial in the healthcare industry, since decisions based on AI-driven models may have far-reaching effects [32,34]. In addition to improving the credibility and dependability of predictive models, XAI provides healthcare professionals with the knowledge required to comprehend the logic behind AI-generated predictions.…”
Section: Interpretability Analysis
“…The interpretability and explainability of artificial intelligence (AI) models are critical in the medical arena, since healthcare practitioners demand insights into the model's decision-making process [32,33]. Deep learning models, particularly neural networks, have been criticized for their “black-box” nature, which makes it difficult to grasp the logic behind the predictions made by these approaches [34–40]. This study intends to overcome these important issues by proposing reliable, explainable, and thus more transparent methods for exploring cutting-edge deep-learning techniques for medical research and practice.…”
Section: Introduction
“…Between the first COVID-19 diagnosis in France and the availability of these templates, French radiologists wrote their reports according to their own experience in thoracic imaging and the objective abnormalities on chest CT. So far, most studies using artificial intelligence have applied supervised methods to medical images to triage patients, distinguish common pneumonitis from COVID-19 lung disease, assess the severity of COVID-19 lung disease, or anticipate oxygen requirement using classical machine-learning or deep-learning algorithms [12–16]. Regarding NLP applications, Li et al. trained supervised machine-learning models to automatically identify CT reports with a diagnosis of acute appendicitis, diverticulitis, or bowel obstruction, and secondarily applied those models to a large population to investigate the impact of the COVID-19 pandemic on their detection in emergency departments [17].…”
Section: Introduction