2021
DOI: 10.3390/make3030037

Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain

Abstract: In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box…

Citations: cited by 97 publications (38 citation statements)
References: 25 publications
“…However, DL models are considered the least interpretable machine learning models due to their inherent mathematical complexity, thus not providing reasoning for the prediction and, consequently, decreasing trust in these models [138]. When utilizing these black-box models in the medical domain, it is critical to have systems that are trustworthy and reliable for clinicians, therefore raising the need to make these approaches more transparent and understandable to humans [139].…”
Section: Computer-aided Decision Systems (mentioning, confidence: 99%)
“…These are off-the-shelf agnostic methods that can be found in libraries such as PyTorch Captum [142]. This post-model approach was implemented by Knapič et al. [139], where two popular post-hoc methods, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), were compared in terms of human understandability of the predictive model on the same medical image dataset.…”
Section: Computer-aided Decision Systems (mentioning, confidence: 99%)
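The cited comparison used LIME and SHAP; as a rough illustration of the same post-hoc workflow, the sketch below applies two stock Captum attribution methods, GradientShap (a SHAP-style method) and Occlusion (a perturbation-based method in the spirit of LIME), to a placeholder CNN. The model choice (an untrained resnet18) and the random input tensor are assumptions, not details from the cited work.

```python
# A rough sketch of the post-hoc explanation workflow described above,
# using two stock Captum methods. The CNN (an untrained resnet18) and
# the random "image" are placeholders, not the cited paper's pipeline.
import torch
from torchvision.models import resnet18
from captum.attr import GradientShap, Occlusion

model = resnet18(weights=None).eval()       # placeholder CNN
image = torch.rand(1, 3, 224, 224)          # stand-in for a VCE frame
target = model(image).argmax(dim=1).item()  # explain the predicted class

# GradientShap: a SHAP-style method that averages gradients along paths
# from a distribution of baseline images to the input.
gs = GradientShap(model)
baselines = torch.randn(5, 3, 224, 224)
gs_attr = gs.attribute(image, baselines=baselines, target=target, n_samples=10)

# Occlusion: a perturbation-based method that slides a patch over the
# image and records how much the output score for `target` drops.
occ = Occlusion(model)
occ_attr = occ.attribute(image, target=target,
                         sliding_window_shapes=(3, 15, 15),
                         strides=(3, 8, 8))

print(gs_attr.shape, occ_attr.shape)  # per-pixel attribution heatmaps
```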
“…The CIU GitHub site https://github.com/KaryFramling/ciu provides executable examples at least for the well-known benchmark data sets Iris, Boston, Heart Disease, UCI Cars, Diamonds, Titanic, and Adult, and for several different machine learning models. CIU is also implemented for image explanations, as reported in [5,7]. The source code used in this paper is published at https://github.com/KaryFramling/AJCAI_2021.…”
Section: Experimental Evaluation (mentioning, confidence: 99%)
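For readers who do not want to pull the repository, the following minimal NumPy sketch illustrates the two core CIU quantities, Contextual Importance (CI) and Contextual Utility (CU), for a single feature. It is an illustration of the definitions, not the API of the KaryFramling/ciu package; the toy predict function and the feature range are assumptions.

```python
# Minimal NumPy sketch of Contextual Importance (CI) and Contextual
# Utility (CU) for one feature -- an illustration of the idea, not the
# API of the KaryFramling/ciu package. `predict` and the feature range
# below are placeholder assumptions.
import numpy as np

def ciu_for_feature(predict, x, j, lo, hi, out_lo=0.0, out_hi=1.0, n=100):
    """Estimate CI and CU of feature j for instance x.

    predict: f(X) -> model outputs in [out_lo, out_hi]
    lo, hi:  the allowed range of feature j
    """
    # Sweep feature j over its range while holding the rest of x fixed
    grid = np.repeat(x[None, :], n, axis=0)
    grid[:, j] = np.linspace(lo, hi, n)
    outs = predict(grid)

    cmin, cmax = outs.min(), outs.max()
    ci = (cmax - cmin) / (out_hi - out_lo)   # importance in this context
    cu = (predict(x[None, :])[0] - cmin) / max(cmax - cmin, 1e-12)
    return ci, cu

# Toy usage with a hypothetical two-feature scoring function
predict = lambda X: 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))
ci, cu = ciu_for_feature(predict, np.array([0.3, 0.8]), j=0, lo=0.0, hi=1.0)
print(f"CI={ci:.2f}, CU={cu:.2f}")
```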
“…Neural networks are a favored analytical method for numerous predictive data mining applications because of their power, adaptability, and ease of use. Predictive neural networks are especially valuable in applications where the underlying process is complex [32–43], such as biological systems [44]. Both the multilayer perceptron (MLP) and the radial basis function (RBF) network have a feedforward architecture, because the connections in the network flow forward from the input layer (predictors) to the output layer (responses).…”
Section: Introduction (mentioning, confidence: 99%)
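The following minimal PyTorch sketch contrasts the two feedforward architectures named in the quote: an MLP, whose hidden units compute weighted sums followed by a nonlinearity, and an RBF network, whose hidden units respond to distance from learned centers. Layer sizes and the Gaussian kernel width parameterization are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch contrasting the two feedforward architectures named
# above. In both, information flows from predictors (inputs) to
# responses (outputs); they differ in what the hidden units compute.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Predictors -> hidden (weighted sums + nonlinearity) -> responses."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_out),
        )
    def forward(self, x):
        return self.net(x)

class RBFNet(nn.Module):
    """Predictors -> Gaussian distances to learned centers -> responses."""
    def __init__(self, n_in, n_centers, n_out):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_centers, n_in))
        self.log_width = nn.Parameter(torch.zeros(n_centers))
        self.out = nn.Linear(n_centers, n_out)
    def forward(self, x):
        # Squared Euclidean distance of each input to each center,
        # passed through a Gaussian kernel with learned widths
        d2 = torch.cdist(x, self.centers).pow(2)
        phi = torch.exp(-d2 / (2 * self.log_width.exp().pow(2)))
        return self.out(phi)

x = torch.rand(8, 4)  # 8 samples, 4 predictors
print(MLP(4, 16, 1)(x).shape, RBFNet(4, 10, 1)(x).shape)  # both (8, 1)
```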