2022
DOI: 10.1007/s00146-022-01536-6
An explanation space to align user studies with the technical development of Explainable AI

Cited by 8 publications (6 citation statements)
References 63 publications
“…As some researchers note, an explanation can be correct but at the same time it may not be any good or useful (for a given task) [1], or, simply put, "one explanation does not fit all" [69]. To be effective, explanations should align with the explainees' objectives and provide them with the required information, and thus be helpful in the given situation, as outside of it the explanatory insights may simply become irrelevant or meaningless [12,52]. This crucial role of context, nonetheless, seems to be rarely acknowledged when choosing metrics and protocols for evaluating XAI techniques, which appears to be one of the main reasons for the inconsistent findings highlighted in the previous paragraph.…”
Section: Evaluation Deficiencies (mentioning; confidence: 99%)
“…Triggered by expectation failures (e.g., a surprising event), learning needs or task requirements, it allows one to adjust misconceptions, reconcile differences and resolve disagreements by helping a person to develop correct knowledge, beliefs and understanding of the explained phenomenon [20]. Such a perspective entails a separation between
• the explanatory artefacts, i.e., the type, content, presentation format and provision mechanism of an explanation, determined by what needs to be explained, and
• the act of explaining, i.e., the explanatory process governed by how to explain it,
which draws an implicit distinction between user interfaces and explainability [1,12,20,31,33,39].…”
Section: Evaluation Deficiencies (mentioning; confidence: 99%)
“…Another area of inquiry is measuring the adequacy of explanations [37]. The insights from "explanation science" can be used to build better explanations [35,38,46]. Since explanation is a social transaction between the explainer and the end user, additional research [35,46] is aimed at tailoring explanations to the cognitive landscape of the end user (i.e.…”
Section: Progress in XAI (mentioning; confidence: 99%)
“…These methods have been applied to improve explainability in different contexts [13,20,41-45]. We call these explanation methods explanation agents (see [46]). Das and Rad [33] have reviewed different approaches to creating explanation agents, including perturbation-based methods (SHAP, LIME) and gradient-based methods (saliency maps).…”
Section: Progress in XAI (mentioning; confidence: 99%)
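To make the distinction in the last statement concrete, below is a minimal sketch of the perturbation-based idea that methods such as SHAP and LIME build on: score each input feature by how much the model's output changes when that feature is occluded. It is an illustration only, not an implementation of SHAP or LIME; the model f, the baseline value and all names here are hypothetical stand-ins.

import numpy as np

def perturbation_attribution(f, x, baseline=0.0):
    # Occlusion-style attribution: replace one feature at a time with a
    # baseline value and record how much the model's score drops.
    base_score = f(x)
    attributions = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline  # occlude feature i
        attributions[i] = base_score - f(x_perturbed)
    return attributions

# Hypothetical linear model, used only to exercise the sketch.
weights = np.array([2.0, -1.0, 0.5])
f = lambda x: float(weights @ x)

x = np.array([1.0, 3.0, 2.0])
print(perturbation_attribution(f, x))  # -> [ 2. -3.  1.]

For a linear model, the occlusion scores recover each feature's contribution exactly; gradient-based methods (e.g., saliency maps) instead differentiate the model's output with respect to its inputs and therefore require a differentiable model.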