2021
DOI: 10.1002/ail2.49
How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking

Abstract: Explainable artificial intelligence (XAI) systems aim to provide users with information to help them better understand computational models and reason about why outputs were generated. However, there are many different ways an XAI interface might present explanations, which makes designing an appropriate and effective interface an important and challenging task. Our work investigates how different types and amounts of explanatory information affect user ability to utilize explanations to understand system beha…

Cited by 10 publications (9 citation statements)
References 58 publications
“…Although the space of explanation techniques is vast, we opted for variations of simple prediction‐specific transparency methods for users to understand – the most influential keyword and confidence value. Inspired by prior studies [ZLB20, LMY*21, WY21], we posit that providing the most influential keyword and confidence value may improve trust calibration by giving users an indication to increase their situational awareness of the AI's performance. We define an influential keyword to be one which, if eliminated, decreases the probability of a post being relevant by the largest amount.…”
Section: Methods
confidence: 99%
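The leave-one-out definition of an "influential keyword" quoted above can be sketched in a few lines: remove each token in turn and measure how much the predicted relevance probability drops. This is a minimal illustration, not the cited authors' implementation; `predict_relevance` is a hypothetical toy scorer standing in for any classifier that outputs a probability.

```python
# Sketch of leave-one-out keyword influence: the most influential keyword
# is the one whose removal lowers the predicted probability of relevance
# by the largest amount. `predict_relevance` is a hypothetical stand-in
# for a real text classifier's probability output.

def predict_relevance(tokens):
    # Toy scorer for illustration only: a few hand-picked keyword weights.
    weights = {"election": 0.4, "fraud": 0.3, "ballot": 0.2}
    return min(1.0, sum(weights.get(t, 0.01) for t in tokens))

def most_influential_keyword(tokens):
    base = predict_relevance(tokens)
    drops = {}
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]  # eliminate one token
        drops[tok] = base - predict_relevance(reduced)
    return max(drops, key=drops.get)

post = ["election", "fraud", "ballot", "news"]
print(most_influential_keyword(post))  # "election" causes the largest drop
```

Real systems would substitute a trained model for the toy scorer, but the selection rule — argmax over probability drops — is exactly the definition given in the quoted statement.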
“…Similarly, Dietvorst et al [DSM18] communicated a model's uncertainty through the outright disclosure of the model's average error rate by a text description (i.e., “the model has an average error rate of x”). Linder et al [LMY*21] explored how the type and amount of explanations affect users' understanding and performance on a fact‐checking task. More detailed explanations, such as providing examples of alternative statements with the same classification and information about the influence of the statement's metadata, led the users to a better understanding of the AI suggestions.…”
Section: Related Work
confidence: 99%
“…While many studies assess the effectiveness of an explanation type, content and presentation medium, they often overlook the inherent correctness and trustworthiness of the underlying explanatory insight, or rather the pervasive lack thereof [43,65,80]. This inspires an alternative, diagnostic conceptualisation of XAI, which focuses on providing users with rigorously tested and well-specified insights into a predictive model instead of attempting to solve the ill-defined "black box" problem [14].…”
Section: Evaluation Deficiencies
confidence: 99%
“…In conclusion, achieving a high level of transparency is not always beneficial to improving the user's understanding [5,81]. Indeed, providing complex or numerous explanations generates a trade-off between their understandability and the time required by human interpreters to interpret them [42,90]. Consequently, it is necessary to comprehend the proper level of transparency, explanation complexity and quantity, even in simple cases [91].…”
Section: Understanding the Human's Perspective in Explainable AI
confidence: 99%