2022
DOI: 10.48550/arxiv.2205.04766
Preprint

Explainable Deep Learning Methods in Medical Imaging Diagnosis: A Survey

Abstract: The remarkable success of deep learning has prompted interest in its application to medical diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the top…

Cited by 8 publications (8 citation statements)
References 85 publications
“…For comparatively difficult medical image segmentation, five types of interpretation methods are summarized: attribution maps, concept attribution, language description, internal network representation, and latent space interpretation. More recently, based on explanation modalities, Patricio et al. [18] categorized interpretation methods for DLB-MIA into five types: explanation by feature attribution, explanation by text, explanation by examples, explanation by concepts, and other approaches. The corresponding application paradigms are clearly illustrated with figures.…”
Section: Methods Taxonomies (mentioning)
confidence: 99%
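To make the "explanation by feature attribution" modality from the statement above concrete, the following is a minimal sketch of a gradient-based attribution (saliency) map. The ResNet-18 backbone and the random input tensor are illustrative stand-ins, not choices made in the cited papers.

import torch
import torchvision.models as models

# Illustrative classifier; any differentiable image model works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input standing in for a preprocessed medical image.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
logits = model(image)
cls = logits.argmax(dim=1).item()

# Attribution = gradient of the predicted-class score w.r.t. input pixels;
# large magnitudes mark pixels whose change most affects the score.
logits[0, cls].backward()
saliency = image.grad.abs().max(dim=1).values.squeeze()  # (224, 224) map
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)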
“…Medical practitioners can contribute vital insights into the clinical relevance and significance of specific features or patterns found in US images through collaborative efforts, allowing AI algorithms to generate more informed predictions. Incorporating real-time feedback and iterative learning into the diagnosis process can also improve diagnostic accuracy [140]. When physicians interact with the AI system and provide feedback on its predictions, the system can adapt and learn, refining its algorithms and improving its performance over time.…”
Section: Enhancing Diagnostic Accuracy (mentioning)
confidence: 99%
“…Saliency methods produce a visual interpretation map that represents the importance of image pixels for the network's classification. Class activation mapping (CAM) is a pioneering saliency method [17]. The approach of [32] uses Global Average Pooling (GAP) to integrate information from all feature maps to obtain the CAM.…”
Section: Saliency Maps (mentioning)
confidence: 99%
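As a concrete illustration of the GAP-based CAM described in that statement, the sketch below weights the final convolutional feature maps by the classifier weights of the predicted class. The torchvision ResNet-18 (whose final conv features feed a GAP layer and a linear classifier, the structure CAM requires) and the random input are assumptions made for the example, not the setup of the cited work.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative backbone: ResNet-18 ends in GAP + a linear classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Capture the feature maps produced by the last conv stage (before GAP).
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(maps=o))

image = torch.randn(1, 3, 224, 224)  # placeholder; use a preprocessed medical image
with torch.no_grad():
    logits = model(image)
cls = logits.argmax(dim=1).item()

# CAM: weight each (H, W) feature map by the classifier weight of the target
# class, sum over channels, then rectify and normalize to [0, 1].
w = model.fc.weight[cls]             # (512,)
fmaps = feats["maps"].squeeze(0)     # (512, 7, 7)
cam = torch.einsum("c,chw->hw", w, fmaps)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# Upsample to input resolution so the map can be overlaid on the image.
cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                    mode="bilinear", align_corners=False).squeeze()

The resulting map highlights the image regions that contributed most to the predicted class, which is the visual interpretation the quoted statement refers to.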