2021 IEEE Symposium on Computers and Communications (ISCC)
DOI: 10.1109/iscc53001.2021.9631485
Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling

Abstract: Explainable AI consists in developing mechanisms that allow interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts such as the medical domain. We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations of the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples. Our framew…

Cited by 11 publications (15 citation statements). References 19 publications.
“…In this section we briefly present the two main components of the methodology adopted to classify and explain the ISIC dataset. Details can be found in [15,14,3].…”
Section: Methods; citation type: mentioning; confidence: 99%
“…We summarize here the customization of abele we carried out in order to make it usable for the complex image classification task of the ISIC dataset. Details can be found in [3]. Generative Adversarial models are generally not easy to train, as they are usually affected by a number of common failures.…”
Section: Progressive Growing AAE; citation type: mentioning; confidence: 99%