2018
DOI: 10.48550/arxiv.1811.11839
Preprint

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems

Cited by 27 publications (55 citation statements)
References: 0 publications
“…Global explanations try to provide an overview of how a model generates its outputs [2,32]. Many explainable systems designed for data experts have focused on visualizing models as global explanations [57]. For instance, Hohman et al [32] built an interactive visual system to summarize and visualize deep-learning models and show how much each layer and what features were used to make predictions.…”
Section: Explanation Meaningfulness and Veracity
Citation type: mentioning (confidence: 99%)
“…Effective evaluation of XAI systems is challenging because it must not only assess how the addition of explanations can improve user understanding and trust in the system, but also whether improvements in understanding allow users to work more efficiently [57]. Human evaluation should also aim to understand which aspects or types of explanations aid human understanding, especially when several types of explanations are provided to the user.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…etc., which complicates comprehensive analysis. These bottlenecks undermine not only any qualitative assessment but also quantitative metrics requiring user interactions [29]. Furthermore, expressing the quantitative metrics in user-understandable terminology [28] is fundamental to achieving interpretability [30], [31].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Furthermore, expressing the quantitative metrics in user-understandable terminology [28] is fundamental to achieving interpretability [30], [31]. To this end, the most popular quantitative metric, explainer fidelity [26], [29], [32]-[35], is not satisfactory. Moreover, explainers intrinsically maintain high fidelity; e.g., GNNEXPLAINER [27] produces an explanation to match the GNN's prediction on the original graph.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
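The fidelity metric criticized in the excerpt above is typically computed as agreement between a model's prediction on the full input and its prediction when only the explained portion of the input is kept. The sketch below illustrates that general idea only; the `model` callable, the `explanation_masks` argument, and the element-wise masking scheme are assumptions for illustration, not the procedure of any cited work.

```python
import numpy as np

def explainer_fidelity(model, inputs, explanation_masks):
    """Fraction of samples on which the model's prediction from the
    explanation-masked input agrees with its prediction from the full input.

    `model` is any callable returning class probabilities; masking by
    element-wise multiplication is a simplifying assumption.
    """
    agreements = 0
    for x, mask in zip(inputs, explanation_masks):
        full_pred = np.argmax(model(x))           # prediction on the original input
        masked_pred = np.argmax(model(x * mask))  # prediction on the explained parts only
        agreements += int(full_pred == masked_pred)
    return agreements / len(inputs)
```

High agreement under this kind of measure is easy for an explainer to attain by construction, which is the shortcoming the excerpt points out.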
“…Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2].…”
Citation type: mentioning (confidence: 99%)