2023
DOI: 10.1109/access.2023.3327808

Explainable Machine-Learning Models for COVID-19 Prognosis Prediction Using Clinical, Laboratory and Radiomic Features

Francesco Prinzi,
Carmelo Militello,
Nicola Scichilone
et al.

Abstract: The SARS-CoV-2 pandemic had devastating effects on many aspects of life: clinical cases, ranging from mild to severe, can lead to lung failure and death. Due to the high incidence, data-driven models can support physicians in patient management. The explainability and interpretability of machine-learning models are mandatory in clinical scenarios. In this work, clinical, laboratory and radiomic features were used to train machine-learning models for COVID-19 prognosis prediction. Using Explainable A…

Cited by 19 publications (3 citation statements)
References 68 publications (117 reference statements)
“…Subsequently, a cross-validation strategy is often employed exclusively on the training set. This approach divides the training set into subsets (called folds); in each round, one fold serves as the validation set and the remaining folds as the training set. It is used for both training and fine-tuning the model, and, ultimately, the model’s performance is assessed on the dedicated test set [18]. For very small datasets (fewer than about 100 samples), the leave-one-out method is typically employed [19, 20].…”
Section: Classifiers: Main Concepts
Citation type: mentioning
confidence: 99%
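The protocol described in this excerpt can be sketched with scikit-learn on synthetic data. This is a generic illustration of the train/validation/test split, not the paper's actual pipeline or dataset:

```python
# Sketch of the cross-validation protocol described above, on synthetic
# data (not the paper's dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import (
    train_test_split, StratifiedKFold, cross_val_score, LeaveOneOut)
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Hold out a dedicated test set; cross-validate only on the training set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: each round uses one fold for validation
# and the remaining folds for training.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
val_scores = cross_val_score(model, X_train, y_train, cv=cv)

# For very small datasets (fewer than ~100 samples), leave-one-out
# cross-validation is typical: each sample is held out once.
loo_scores = cross_val_score(model, X_train, y_train, cv=LeaveOneOut())

# Final performance is assessed once, on the untouched test set.
test_score = model.fit(X_train, y_train).score(X_test, y_test)
```

Cross-validating only within the training set keeps the test set untouched by model selection, so the final score is an unbiased estimate of generalization.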
“…Conversely, a local explanation focuses on elucidating the system’s decision for a particular instance, such as a patient. This approach allows for a detailed examination of the model’s findings and facilitates clinical validation and comparisons with existing medical literature [18]. These considerations carry significant ethical, legal, and trust-related implications.…”
Section: How To Choose a Classifier
Citation type: mentioning
confidence: 99%
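A minimal example of a local explanation, assuming a linear model (not the explainability method used in the paper): for logistic regression, each feature's contribution to one specific prediction is simply its coefficient times the feature value, and these contributions plus the intercept reconstruct the decision for that instance exactly.

```python
# Minimal sketch of a local (per-instance) explanation for a linear
# model; a generic illustration, not the paper's XAI pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=150, n_features=5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

instance = X[0]                           # one "patient"
contributions = model.coef_[0] * instance  # per-feature contributions

# The decision function decomposes exactly into per-feature
# contributions plus the intercept, making each feature's role
# in this single decision explicit.
decision = contributions.sum() + model.intercept_[0]
assert np.isclose(decision, model.decision_function(X[:1])[0])
```

This additive decomposition is what makes such local explanations directly comparable against clinical knowledge for an individual patient; for non-linear models, model-agnostic methods play the analogous role.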
“…F. Prinzi et al. emphasized the significance of employing explainable AI algorithms in clinical settings to ensure the interpretability of predictive models for COVID-19 prognosis. This emphasis on interpretability facilitates informed clinical decision-making [29]. De et al. [30] introduced a widely practiced intervention for modifying cardiac health, highlighting the varied effects of physical activity on older adults.…”
Section: Introduction
Citation type: mentioning
confidence: 99%