2023
DOI: 10.3389/fnagi.2023.1238065

An eXplainability Artificial Intelligence approach to brain connectivity in Alzheimer's disease

Nicola Amoroso,
Silvano Quarto,
Marianna La Rocca
et al.

Abstract: The advent of eXplainable Artificial Intelligence (XAI) has revolutionized the way human experts, especially from non-computational domains, approach artificial intelligence; this is particularly true for clinical applications, where the transparency of results is often compromised by algorithmic complexity. Here, we investigate how Alzheimer's disease (AD) affects brain connectivity within a cohort of 432 subjects whose T1 brain Magnetic Resonance Imaging (MRI) data were acquired within the Alzheimer's…

Cited by 6 publications (4 citation statements)
References 102 publications
“…CNN has achieved major advances in classifying AD, but there are still many obstacles to overcome, especially given the dearth of neuroimaging data and its potential application in this area. The authors in Amoroso et al 4 examined how brain connectivity is affected by AD, using T1 brain Magnetic Resonance Imaging (MRI) data acquired within the ADNI. They showed how graph theory-based models can accurately identify these clinical problems and how game theory's SHapley values were applied to make the developed models understandable and simple to grasp.…”
Section: Related Work
confidence: 99%
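As context for the graph theory-based models mentioned above, brain connectivity is commonly summarized through node-level graph metrics such as degree and local clustering coefficient computed on a binarized connectivity matrix. A minimal, hypothetical sketch in pure Python (the function name and example matrix are illustrative, not taken from the paper):

```python
def graph_features(adj):
    """Compute node degrees and local clustering coefficients
    from a symmetric binary adjacency matrix (list of lists of 0/1)."""
    n = len(adj)
    # Degree of each node: number of direct connections.
    degrees = [sum(row) for row in adj]
    clustering = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]]
        k = len(neigh)
        if k < 2:
            # Clustering is undefined for fewer than two neighbors; use 0.
            clustering.append(0.0)
            continue
        # Count edges among the neighbors of node i.
        links = sum(adj[u][v]
                    for idx, u in enumerate(neigh)
                    for v in neigh[idx + 1:])
        # Fraction of possible neighbor pairs that are connected.
        clustering.append(2.0 * links / (k * (k - 1)))
    return degrees, clustering
```

On a fully connected triangle every node has degree 2 and clustering coefficient 1.0; a disrupted (sparser) network yields lower values, which is the kind of per-region feature such models feed into a classifier.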
“…More broadly, dementia is described as the greatest global challenge for health care and social services; approximately 50 million people worldwide were living with dementia in 2022 2 . AD is the most prevalent form of dementia (60–70% of cases), and estimates suggest that over 150 million individuals will develop dementia by 2050 3 , 4 . AD and dementia patients face a variety of challenges, including cognitive impairment, memory loss, behavioral deficits, and difficulties with vision and mobility that can make daily routine tasks hard to perform 5 .…”
Section: Introduction
confidence: 99%
“…It is an ensemble learning method that creates several decision trees and combines their predictions to make a final decision or prediction. Recent advancements in ML techniques have led to the introduction of eXplainable Artificial Intelligence (XAI), which allows the crucial attributes for each instance to be identified ( 10 – 12 ). Explainable AI provides clarity and insight into the decision-making processes of AI models.…”
Section: Introduction
confidence: 99%
“…In particular, 1st-order and 2nd-order statistical measures extracted from white matter regions and combined with clinical information were used as inputs to a tree-based algorithm to distinguish svPPA and nfvPPA from healthy controls, and to differentiate between PPA phenotypes. Moreover, the importance of features in the classification performance was evaluated using the SHapley Additive exPlanations (SHAP) method ( Lundberg and Lee, 2017 ), an approach widely applied in healthcare systems ( Deshmukh and Merchant, 2020 ; Amoroso et al, 2023 ; Leandrou et al, 2023 ) that improves the interpretability of a machine learning model.…”
Section: Introduction
confidence: 99%
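The SHAP values referenced throughout these citation statements derive from the game-theoretic Shapley value, which splits a model's prediction among features according to each feature's average marginal contribution over all coalitions of the other features. A minimal exact computation in pure Python (the toy value function below is illustrative, not any of the cited papers' models; practical SHAP implementations rely on efficient approximations such as TreeSHAP rather than this exponential enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.
    players: list of player (feature) ids.
    value: callable mapping a frozenset of players to a payoff
           (for SHAP, the model output restricted to that feature subset)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight = probability of this coalition preceding p
                # in a uniformly random ordering of all players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to this coalition.
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi
```

For a toy game where feature 'a' alone contributes 2, 'b' alone contributes 1, and their joint presence adds an interaction of 1, the interaction is split equally, giving phi(a) = 2.5 and phi(b) = 1.5; the values sum to the full-coalition payoff, which is the efficiency property that makes SHAP attributions add up to the model's prediction.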