2020
DOI: 10.48550/arxiv.2004.14545
Preprint
Explainable Deep Learning: A Field Guide for the Uninitiated

Cited by 6 publications (6 citation statements); references 0 publications.
“…Fortunately, in parallel with the development of AI, the field of explainable AI (XAI) has recently emerged. XAI does not represent a single recipe for understanding AI decision-making, but is rather a conceptual framework in which many different methods are being developed with different underlying assumptions about what “explainable” means (Ras et al, 2020 ). Nevertheless, there are several general traits —evaluation criteria—for XAI.…”
Section: Deep Learning Applications in Neuroimaging
confidence: 99%
“…The cultural relativity of ethics prevents us from imprinting a universal moral code on a model. The ability to understand whether an AI's decision is consistent with the moral code of the environment in which it operates is therefore a more viable solution (Ras et al, 2020 ).…”
Section: Deep Learning Applications in Neuroimaging
confidence: 99%
“…In image recognition, another widely used family of methods is based on visualization. These methods express an explanation by highlighting the characteristics of the image that influence the output of a DNN [15]. The best known of them, Grad-CAM [16], creates a class activation map using the gradients of the DNN's output with respect to the last convolutional layer.…”
Section: Introduction
confidence: 99%
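The Grad-CAM recipe described in the statement above — average the gradients of the class score over each feature map to get channel weights, take the weighted sum of the feature maps, then clamp at zero — can be sketched in a few lines. This is a minimal NumPy sketch of that computation, assuming the last convolutional layer's activations and the corresponding gradients have already been extracted from the network (the function name and array shapes are illustrative, not from the cited papers):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from the last conv layer.

    activations: (C, H, W) feature maps of the last convolutional layer
    gradients:   (C, H, W) gradients of the target class score w.r.t.
                 those feature maps
    returns:     (H, W) non-negative class activation map
    """
    # Channel weights: global-average-pool the gradients per feature map.
    weights = gradients.mean(axis=(1, 2))               # shape (C,)
    # Weighted combination of the feature maps.
    cam = np.tensordot(weights, activations, axes=1)    # shape (H, W)
    # ReLU: keep only regions with positive influence on the class score.
    return np.maximum(cam, 0.0)
```

In a real framework the activations and gradients would be captured with hooks during a backward pass from the chosen class logit; the heatmap is then upsampled to the input resolution and overlaid on the image.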