2020
DOI: 10.48550/arxiv.2001.09734
Preprint
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

Kacper Sokol,
Peter Flach

Abstract: The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability me…

Cited by 3 publications (6 citation statements)
References 18 publications (45 reference statements)
“…Work pertaining to the technical aspects of XAI also identifies the importance of such "user-centric" explanations [100,108]. Numerous papers have emphasized the need for explanations that are tailored to the context, audience and purpose of the explanation [2,32,33,36,90].…”
Section: Interactivity in Explainability
Citation type: mentioning
Confidence: 99%
“…For each paper in their corpus, they documented personalized explanation properties (complexity, content and presentation), personalization granularity (to each user or per category of user) and personalization automation (manual or automatic). Additionally, they observed that personalization of explanations can be either iterative or one-off, with user information being collected once prior to showing explanations [100,108]. While the personalization of explanations is particularly important given the role of explanations in filling one's specific knowledge gaps, we believe there is a greater granularity of interaction to explore beyond the categories mentioned in [100].…”
Section: Interactivity in Explainability
Citation type: mentioning
Confidence: 99%
“…Cheng et al (2019) found that their interactive interface, which allowed users to modify the value of features and see the impact of this change on the prediction of a linear regressor (what if), increased users' objective and self-reported understanding of the ML model compared to a static interface, which did not allow such changes. Sokol and Flach (2020b) studied counterfactual explanations for DTs in an interactive system where users could change or remove features, or request an explanation for a hypothetical instance. Counterfactual explanations were generated by representing a tree structure as binary meta-features, and selecting the shortest statement that minimizes an L1-like metric compared to the DT trace.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
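The excerpt above describes generating counterfactual explanations over a decision tree by encoding the tree's split conditions as binary meta-features and selecting the shortest statement that minimizes an L1-like metric. A minimal illustrative sketch of that idea (not the authors' implementation; the example tree, condition names, and breadth-first search over flips are invented for illustration):

```python
# Hypothetical sketch: counterfactuals for a decision tree whose split
# conditions are encoded as binary meta-features. The minimal counterfactual
# is the smallest set of condition flips (equivalently, the candidate binary
# vector closest in L1 distance) that changes the prediction.
import itertools

def predict(meta):
    """Toy decision tree over binary meta-features (invented example)."""
    if meta["income>50k"]:
        return "approve" if meta["debt<10k"] else "reject"
    return "approve" if meta["employed"] else "reject"

def counterfactual(meta, desired):
    """Return the smallest set of meta-feature flips yielding `desired`.

    Searching flip sets in order of increasing size means the first hit
    is minimal in L1 distance on the binary meta-feature vector.
    """
    names = list(meta)
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            candidate = dict(meta)
            for name in combo:
                candidate[name] = not candidate[name]
            if predict(candidate) == desired:
                return combo
    return None  # no counterfactual exists for this tree

flips = counterfactual(
    {"income>50k": True, "debt<10k": False, "employed": False}, "approve")
print(flips)  # -> ('debt<10k',)
```

Phrased as an explanation, the result reads: "had your debt been below 10k, the application would have been approved" — the shortest such statement for this instance.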
“…Reiter (2019) argued that good explanations must be written for a specific purpose and audience, have a narrative structure, and use vague language to communicate uncertainty. The explanations generated in (Sokol and Flach, 2020b) and (Biran and McKeown, 2017) have a narrative structure, and those in (Biran and McKeown, 2017) use vague language to convey strength of evidence. A different perspective is offered by expectation theory, which posits that the surprisingness of an event may stem from a discrepancy between the state of the world and propositions that are deducible from presented information (Ortony and Partridge, 1987).…”
Section: Related Work
Citation type: mentioning
Confidence: 99%