2022
DOI: 10.1007/s42979-022-01371-y
Machine Learning Workflow to Explain Black-Box Models for Early Alzheimer’s Disease Classification Evaluated for Multiple Datasets

Abstract: Hard-to-interpret black-box Machine Learning (ML) was often used for early Alzheimer’s Disease (AD) detection. To interpret eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and Support Vector Machine (SVM) black-box models, a workflow based on Shapley values was developed. All models were trained on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and evaluated for an independent ADNI test set, as well as the external Australian Imaging and Lifestyle flagship study of Ageing (AIBL), and O…

Cited by 11 publications (11 citation statements)
References 86 publications
“…Hamza et al [121] experimented with neural network models for early AD detection, employing classification approaches on a hybrid dataset drawn from Kaggle and OASIS. Louis et al [154] propose a machine learning workflow to train and interpret different black-box models and to compare their performance. All models were trained and evaluated on the ADNI, AIBL and OASIS datasets.…”
Section: Results (mentioning)
confidence: 99%
“…They consequently provide doctors with an understanding of how and why the model makes its judgements. SHAP is used by Ahmed et al [152] and Louis et al [154] to determine the order of informative predictors in test data. ML models and their relationships were also visualised and analysed using SHAP summary plots.…”
Section: Visual (mentioning)
confidence: 99%
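The Shapley-value attribution that the quoted statements refer to can be illustrated with a minimal sketch. This is not the authors' workflow or the `shap` library itself; it is an exact, brute-force Shapley computation for a single instance, with an assumed toy model and an assumed baseline of zeros, shown only to make the "order of informative predictors" idea concrete:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance.

    For each feature i, average the marginal contribution of revealing
    x[i] over all coalitions of the remaining features, with the
    classic weighting |S|! (n - |S| - 1)! / n!. Features outside the
    coalition are set to their baseline value (a common, but assumed,
    masking convention). Exponential in n, so only viable for small n.
    """
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Hypothetical additive model: for a linear model each Shapley value
# equals that feature's own contribution above the baseline.
model = lambda z: 3 * z[0] + 2 * z[1]
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi == [3.0, 2.0]; ranking by |phi| gives the order of informative predictors
```

Sorting features by the magnitude of their Shapley values is what a SHAP summary plot visualises across many instances; in practice one would use the `shap` package's tree explainers rather than this exponential-time reference implementation.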