2018
DOI: 10.1007/978-3-319-98131-4_2
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges

Abstract: Issues regarding explainable AI

Citations: cited by 156 publications (115 citation statements)
References: 52 publications
“…The use of big data to train AI-models has, to name a few, been adopted by government agencies, institutions of law, human resources, and medical decision systems. A number of such models have been shown to make decisions based on the gender or ancestral origin of an individual, leading to concerns about their "fairness" [15,16,17]. With the recent enforcement of the General Data Protection Regulation laws 4 , individuals have the right to know the rationale behind an automated decision concerning them.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Indeed, after network training it is very difficult to get back into the feature-set used to define the model. For these reasons, this study was based on machine learning algorithms, which are more transparent and explainable compared to deep learning ones [17].…”
Section: Introduction (mentioning)
Confidence: 99%
“…(explanation, justification, transparency, etc.) Several recent papers have tried to clarify those expressions [20,3,21,4,22,2,23,5] and separate the various approaches in two goals: build interpretable models and/or provide justification of the prediction. [24] for instance, described an interpretable proxy (a decision tree) able to explain the logic of each prediction of a pretrained convolutional neural networks.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Reliably evaluating the quality of an explanation is not straightforward [3,4,2,5]. In this work, we propose to evaluate the explainability power of the semantic bottleneck by measuring its capacity to detect failure of the prediction function, either through an automated detector as [6], or through human judgment.…”
Section: Introduction (mentioning)
Confidence: 99%