2021
DOI: 10.48550/arxiv.2105.02357
Preprint

Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain

Abstract: In this paper we present the potential of Explainable Artificial Intelligence methods for decision-support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). The visual explanations were provided on in-vivo gastral images obtained from a video capsule endoscopy (VCE), with the goal of increasing the health professionals' tr…
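As a minimal illustration of the kind of visual explanation the abstract describes (not the paper's own pipeline, which compares methods such as LIME and SHAP), the sketch below computes a plain gradient saliency heatmap for a single CNN prediction; the `model`, the 224x224 preprocessing, and the file name are assumed placeholders.

```python
# Minimal sketch (assumed placeholders): a vanilla gradient saliency map as a
# visual explanation of a single CNN prediction on an endoscopy frame.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=None)  # stand-in CNN, not the paper's trained network
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
image = preprocess(Image.open("vce_frame.png").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

logits = model(image)
pred_class = int(logits.argmax(dim=1))

# Gradient of the predicted-class score with respect to the input pixels.
logits[0, pred_class].backward()
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)  # HxW heatmap

# Normalize to [0, 1] so the map can be overlaid on the original frame.
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```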

Cited by 6 publications (5 citation statements)
References 23 publications
“…5.1 Use SIPA to evaluate different explanation approaches: [24] compared different explanation methods (e.g., LIME, SHAP, and CIU) and found that they have different effects on experienced transparency, understandability, or satisfaction. Overall, several studies indicate a close connection between the function of an explanation and the situation in which it is given, e.g., [5].…”
Section: Discussion (mentioning)
confidence: 99%
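The comparison referenced in the statement above covers LIME, SHAP, and CIU. For the LIME part, a hedged sketch with the `lime` package might look as follows, reusing the placeholder `model` from the earlier snippet; `vce_frame` (an HxWx3 numpy array) and the prediction wrapper are assumptions, and CIU is omitted here.

```python
# Sketch (assumed names): explaining one CNN prediction with LIME's image explainer.
import numpy as np
import torch
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes batches of HxWx3 arrays; return class probabilities."""
    batch = torch.tensor(images, dtype=torch.float32).permute(0, 3, 1, 2) / 255.0
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    vce_frame,          # HxWx3 numpy array of the endoscopy frame (assumed)
    predict_fn,
    top_labels=2,
    num_samples=1000,   # number of perturbed samples LIME evaluates
)

# Keep the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)  # explanation boundaries drawn on the frame
```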
“…However, DL models are considered the least interpretable machine learning models due to the inherent mathematical complexity; thus, not providing a reasoning for the prediction and, consequently, decreasing the trust in these models [138]. When utilizing these black-box models in the medical domain, it is critical to have systems that are trustworthy and reliable to the clinicians, therefore raising the need to make these approaches more transparent and understandable to humans [139].…”
Section: Interpretability Methods for Nodule-Focused CADs (mentioning)
confidence: 99%
“…These are off-the-shelf, model-agnostic methods that can be found in libraries such as PyTorch Captum [142]. This post-model approach was implemented by Knapič et al. [139], where two popular post-hoc methods, local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP), were compared in terms of their understandability for humans when explaining the same predictive model on the same medical image dataset.…”
Section: Interpretability Methods for Nodule-Focused CADs (mentioning)
confidence: 99%
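Captum does ship model-agnostic implementations of both methods named in the statement above; a hedged sketch using them on the placeholder `model` and preprocessed `image` tensor from the earlier snippets could look like this. The 16x16 patch feature mask and the sample counts are illustrative assumptions.

```python
# Sketch: post-hoc LIME and KernelSHAP attributions via PyTorch Captum.
import torch
from captum.attr import KernelShap, Lime

# Group pixels into a 16x16 grid of patches so perturbation acts on patches
# rather than on individual pixels (purely an illustrative choice).
cell = 14  # 224 / 16
ids = torch.arange(16 * 16).reshape(16, 16)
feature_mask = ids.repeat_interleave(cell, 0).repeat_interleave(cell, 1)
feature_mask = feature_mask.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)

with torch.no_grad():
    target = int(model(image).argmax(dim=1))

lime_attr = Lime(model).attribute(
    image, target=target, feature_mask=feature_mask, n_samples=200
)
shap_attr = KernelShap(model).attribute(
    image, target=target, feature_mask=feature_mask, n_samples=200
)
# Both results have the input's shape and can be rendered as heatmaps,
# e.g. with captum.attr.visualization.visualize_image_attr.
```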
“…They provide results demonstrating the expressiveness of SHAP values in terms of discrimination ability between different output classes and better alignment with human intuition compared to many other existing methods [62]. Several works have adopted the SHAP method for image classification or object detection [67,68].…”
(mentioning)
confidence: 86%
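A hedged sketch of that per-class use of SHAP for an image classifier, using the `shap` package's GradientExplainer on the same placeholder model as above; the random background and test batches are stand-ins, and the unpacking of shap_values assumes a shap version that returns one attribution array per ranked class.

```python
# Sketch: per-class SHAP attributions for an image classifier with the shap package.
import numpy as np
import shap
import torch

background = torch.rand(16, 3, 224, 224)  # stand-in background batch (assumed)
to_explain = torch.rand(2, 3, 224, 224)   # stand-in images to explain (assumed)

explainer = shap.GradientExplainer(model, background)
# ranked_outputs=2 returns attributions for the two highest-scoring classes,
# which is what makes the between-class discrimination visible.
shap_values, class_indexes = explainer.shap_values(to_explain, ranked_outputs=2)

# shap.image_plot expects HxWxC arrays, so move channels last before plotting.
shap_numpy = [np.transpose(sv, (0, 2, 3, 1)) for sv in shap_values]
test_numpy = np.transpose(to_explain.numpy(), (0, 2, 3, 1))
shap.image_plot(shap_numpy, test_numpy)
```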