26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450644
I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI

Abstract: Unintended consequences of deployed AI systems fueled the call for more interpretability in AI systems. Often explainable AI (XAI) systems provide users with simplifying local explanations for individual predictions but leave it up to them to construct a global understanding of the model behavior. In this work, we examine if non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations. We applied a mixed methods approach consisting of a moderated study with…

Cited by 76 publications (53 citation statements)
References 44 publications
“…Model-agnostic local explanation methods, such as Shapley additive explanations (SHAP) (Lundberg and Lee, 2017) and Local interpretable model-agnostic explanations (LIME) (Ribeiro et al., 2016) have the potential to overcome this issue due to how the methods consistently and transparently quantify the input’s effect on prediction across most model types (Lundberg et al., 2018). However, this leads back to the original criticism of describing the extraction of meaning post-hoc from the black box as a practice with potential bias [57], the ability to purposely engineer explanations (Slack et al., 2020) and the likelihood of false conclusions being made by inexperienced users (Chromik et al., 2021).…”
Section: Explainable (Interpretable) Machine Learning For Genotype To... (mentioning)
confidence: 99%
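As context for the quote above: SHAP produces additive local explanations, assigning each input feature a Shapley-value contribution that sums (together with a base value) to the model's prediction for a single instance. The sketch below illustrates that idea; it assumes the open-source shap package and scikit-learn, and the diabetes dataset and random-forest regressor are stand-ins chosen for illustration, not the setups used in the cited papers.

```python
# Minimal sketch of an additive local explanation with SHAP.
# Assumptions (not from the cited papers): scikit-learn's diabetes dataset
# and a random-forest regressor stand in for the model being explained.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer assigns each feature an additive contribution (Shapley value)
# for one prediction: base value + sum of contributions = model output.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:1])  # explain the first instance only

for name, contribution in zip(X.columns, explanation.values[0]):
    print(f"{name:>4s}: {contribution:+.2f}")
print("base value:", explanation.base_values[0])
print("prediction:", model.predict(X.iloc[:1])[0])
```

Such a per-instance breakdown is exactly the kind of local view that, per Chromik et al. (2021), users tend to over-generalize from.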
“…To be precise, these rules were counterfactual and of the format "if the alcohol intake would have been 1 unit or less, the system would have advised a normal dose of insulin". Furthermore, Chromik et al (2021) found that users generalize from a collection of (contrasting, so including counterfactuals) local (Shapley) explanations, and typically do this incorrectly. That further supports that a pure case-by-case approach, where counterfactuals are presented but without overarching generalizations, doesn't truly explain the functioning of an algorithm.…”
Section: Counterfactuals Alone (mentioning)
confidence: 99%
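The counterfactual rule quoted above follows a common recipe: find the smallest change to an input that would have flipped the system's decision. Below is a minimal sketch of that recipe under assumed conditions: a scikit-learn classifier on synthetic data, with a hypothetical model, features, and candidate grid rather than the insulin-dosing system described in the quote.

```python
# Minimal sketch of a single-feature counterfactual search, in the spirit of
# the rule quoted above. The logistic-regression model, synthetic data, and
# candidate grid are illustrative assumptions, not the cited insulin system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels
model = LogisticRegression().fit(X, y)

def single_feature_counterfactual(model, x, feature_idx, candidates):
    """Return a copy of x with the smallest change to one feature that flips
    the predicted class, or None if no candidate value flips it."""
    original = model.predict(x.reshape(1, -1))[0]
    for value in sorted(candidates, key=lambda v: abs(v - x[feature_idx])):
        x_cf = x.copy()
        x_cf[feature_idx] = value
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf
    return None

x = X[0]
cf = single_feature_counterfactual(model, x, 0, np.linspace(-3, 3, 61))
print("original class:", model.predict(x.reshape(1, -1))[0])
if cf is not None:
    print(f"flipping feature 0 from {x[0]:.2f} to {cf[0]:.2f} changes the class")
```

A case-by-case search like this yields individual rules but, as the quote argues, no overarching generalization of the model's behavior.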
“…Usages. Researchers have conducted user-studies on the use of certain explanations for certain stakeholders and data types [2,12,13,23,25,52]. Yet, no extensive work involves developers debugging computer vision models.…”
Section: Machine Learning Explainability (mentioning)
confidence: 99%