2022
DOI: 10.3390/biomedinformatics2030031
Interpretable Machine Learning with Brain Image and Survival Data

Abstract: Recent developments in research on artificial intelligence (AI) in medicine deal with the analysis of image data such as Magnetic Resonance Imaging (MRI) scans to support the decision-making of medical personnel. For this purpose, machine learning (ML) algorithms are often used, which do not explain the internal decision-making process at all. Thus, it is often difficult to validate or interpret the results of the applied AI methods. This manuscript aims to overcome this problem by using methods of explainable AI (XAI) […]

Cited by 14 publications (6 citation statements)
References: 77 publications
“…As a result, it is difficult to validate or interpret the results. The paper by Matthias Eder and his colleagues [12] is an interesting practical example of how to overcome this problem by using methods of explainable AI (XAI). The authors present an application of visual explanations to interpret the decision of an ML algorithm in the case of predicting the survival rate of brain tumor patients based on their MRI scans.…”
Section: Interpretable Complex Predictive Algorithms
Mentioning confidence: 99%
“…Machine-learning and deep-learning models also incur high computing costs during model training. To improve the interpretability and transparency of these black-box models, explainable AI models are currently used in various studies [38][39][40][41][42][43][44][45][46][47]. The collaboration of experts from biology, computer science and other fields can improve the transparency of such methods.…”
Section: Challenges in ML and DL
Mentioning confidence: 99%
“…Several explainable AI models have been developed for biomedical applications, such as MRI scan images to predict the survival of brain tumour patients [38], ECG data to predict cardiovascular disorders [39] and risk-factor identification for diabetic retinopathy [40]. In these studies, SHAP analysis has been incorporated into the AI models to interpret the outcome of the classifier.…”
Section: Explainable AI
Mentioning confidence: 99%
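The excerpt above names SHAP as the interpretability layer but does not show how it attaches to a classifier. As a rough illustration only, and not the cited authors' actual pipeline, the sketch below wires shap's GradientExplainer to a toy Keras CNN trained on random stand-in "slice" data; every shape, layer size, and variable name here is an assumption made for the example.

```python
import numpy as np
import shap
from tensorflow import keras

# Stand-in data: 64 single-channel 64x64 "slices" with binary labels
# (e.g. short vs. long survival). All shapes are illustrative.
X = np.random.rand(64, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(64,))

# Small CNN classifier; the cited studies use far larger networks.
model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, verbose=0)

# GradientExplainer approximates SHAP values from model gradients,
# using a background sample as the reference distribution.
explainer = shap.GradientExplainer(model, X[:32])
shap_values = explainer.shap_values(X[:3])

# Render per-pixel attributions overlaid on the input slices,
# the kind of visual explanation the excerpt describes.
shap.image_plot(shap_values, X[:3])
```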
“…Ibrokhiov et al. [24], in response to the rising incidence of pneumonia, particularly in the wake of the COVID-19 pandemic, introduced an advanced DL-based computer-aided diagnostic system leveraging transfer learning and parallel computing techniques with VGG19 and ResNet50 models, achieving an impressive average classification accuracy of 96.6% on the COVID-QU-Ex dataset. Eder et al. [25] address the challenge of interpreting machine learning algorithms applied to medical image data, specifically in predicting brain tumor survival rates from MRI scans. By leveraging explainable AI techniques, such as Shapley overlays, in conjunction with a CNN and the BraTS 2020 dataset, this research demonstrates the improved interpretability of key features, facilitating expert validation and enhancing the overall evaluation of predictive outcomes.…”
Section: The Literature Review
Mentioning confidence: 99%
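The excerpt above describes transfer learning with pretrained VGG19/ResNet50 backbones. As a minimal sketch of that general recipe, not the cited study's configuration, the Keras snippet below freezes an ImageNet-pretrained VGG19 and trains only a new classification head; the class count, head layers, and the commented-out train_ds/val_ds dataset names are assumptions for illustration.

```python
from tensorflow import keras

# Pretrained VGG19 backbone; ImageNet weights stay frozen so only
# the new classification head is trained on the target images.
base = keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Hypothetical 3-class head (layer sizes and class count are
# assumptions, not taken from the cited work).
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With integer-labelled tf.data pipelines (hypothetical names):
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the backbone is what keeps the training cost low relative to training from scratch; fine-tuning the top convolutional blocks afterwards is a common second step when more labelled data is available.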