2023
DOI: 10.1007/978-1-0716-3195-9_22

Interpretability of Machine Learning Methods Applied to Neuroimaging

Abstract: Deep learning methods have become very popular for the processing of natural images and were then successfully adapted to the neuroimaging field. As these methods are non-transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may obtain high performance even when using irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected by using interpretability methods…
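For context, one common family of interpretability methods in this setting is the gradient-based saliency map, which attributes a classifier's decision to individual voxels and can reveal when the model relies on anatomically irrelevant regions. The sketch below is a minimal illustration in PyTorch; the toy model, tensor sizes, and function names are assumptions for demonstration only and do not reproduce the chapter's methods or experiments.

```python
# Minimal sketch: gradient-based saliency map for a 3D image classifier.
# The model and input are illustrative placeholders, not taken from the chapter.
import torch
import torch.nn as nn

class TinyCNN3D(nn.Module):
    """Toy 3D CNN standing in for a neuroimaging classifier."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def saliency_map(model: nn.Module, volume: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d score_target / d voxel| for a single input volume."""
    model.eval()
    volume = volume.clone().requires_grad_(True)
    score = model(volume)[0, target]   # logit of the class of interest
    score.backward()                   # gradients w.r.t. every input voxel
    return volume.grad.abs().squeeze(0)

if __name__ == "__main__":
    model = TinyCNN3D()
    scan = torch.randn(1, 1, 32, 32, 32)  # fake volume (batch, channel, D, H, W)
    sal = saliency_map(model, scan, target=1)
    print(sal.shape)  # voxel-wise attribution with the input's spatial dimensions
```

Inspecting such voxel-wise attribution maps is one way to notice when high classification performance comes from dataset biases (e.g., scanner artifacts or regions outside the brain) rather than disease-related anatomy.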

Cited by 5 publications (2 citation statements); references 40 publications.
“… 75 However, none of the studies identified through our review incorporated clinicians to systematically validate model explanations. Although there is a range of supporting literature, perspectives, and reviews highlighting the need for interpretable machine learning in medical imaging, 11, 27, 29, 76, 77 being able to demonstrate its impact through semi‐structured interviews and qualitative analysis would be a key step toward proving how such techniques can fulfil that need. Moreover, the complexity and heterogeneity of neurodegenerative disease pathology have limited researchers’ ability to make conclusive statements about newly identified regions of interest.…”
Section: Discussion
“… 28 Similarly, Thibeau‐Sutre and colleagues performed a review on interpretable methods in neuroimaging, where they highlighted various methods and assessed their reliability. 29 However, to our knowledge, this systematic review is the first to consider both imaging and non–imaging‐based machine learning methods for dementia diagnosis, where model interpretability is a specific inclusion criterion. Our review is also not limited to Alzheimer's disease but considers approaches that include a range of dementia‐causing neurodegenerative diseases.…”
Section: Introduction