Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3514094.3534164

How Cognitive Biases Affect XAI-assisted Decision-making

Abstract: The field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to complex AI systems. Although it is usually considered an essentially technical field, efforts have been made recently to better understand users' human explanation methods and cognitive constraints. Despite these advances, the community lacks a general vision of what and how cognitive biases affect explainability systems. To address this gap, we present a heuristic map which matches human cognitive biases with explainability techniques…

Cited by 62 publications (41 citation statements: 2 supporting, 39 mentioning, 0 contrasting); references 60 publications.

“…However, empirical research reveals several limitations within the existing AI-assisted decision-making framework, wherein AI acts primarily as a recommender. One notable issue is that individuals, when passively receiving AI suggestions, seldom engage in analytical thinking [3,7,38]. Furthermore, people frequently inappropriately rely on the AI's recommendations (such as overreliance and under-reliance) [8,30,33,46] and the mere provision of AI explanations can, paradoxically, exacerbate overreliance [2,37].…”
Section: Introduction (mentioning)
confidence: 99%
“…Here, we focus on participants' experiences and perceptions in different conditions. Specifically, referring to and adapted from related works, we investigate the following subjective measures as 7-point Likert scale questions in the exit survey (1: Strongly Disagree, 7: Strongly Agree): (1) Trust in AI [14,39]; (2) Confidence in the decision-making process [52,82]; (3) Perceived complexity of the system [14]; (4) Mental demand [14,39,42,52]; (5) Perceived autonomy [44]; (6) Satisfaction [39]; (7) Future use [12]; (8) Trust in the estimation of human-AI CL; (9) Perceived usefulness of estimation of human-AI CL [56]; (10) Perceived helpfulness to decide when to trust the AI [56]; and (11) Acceptance of estimation of their CL. Besides these questions, we also asked participants open-ended questions about how they used and perceived the communicated human-AI CL, and how their decision-making processes were affected by different interface designs.…”
Section: Measures for RQ3 (mentioning)
confidence: 99%
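
To make the measurement setup above concrete, here is a minimal sketch, not the cited study's analysis code, of how 7-point Likert responses for such items could be encoded and compared across two interface conditions. The measure names, group sizes, and responses are hypothetical; only the 7-point scale (1 = Strongly Disagree, 7 = Strongly Agree) comes from the quoted text.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical subset of the subjective measures listed in the quote.
MEASURES = ["trust_in_ai", "decision_confidence", "perceived_complexity",
            "mental_demand", "perceived_autonomy", "satisfaction", "future_use"]

rng = np.random.default_rng(0)
# Hypothetical responses on a 1-7 scale: rows = participants, columns = measures.
condition_a = rng.integers(1, 8, size=(30, len(MEASURES)))
condition_b = rng.integers(1, 8, size=(30, len(MEASURES)))

for i, name in enumerate(MEASURES):
    # Likert responses are ordinal, so a rank-based test is a common choice.
    stat, p = mannwhitneyu(condition_a[:, i], condition_b[:, i],
                           alternative="two-sided")
    print(f"{name}: median A={np.median(condition_a[:, i]):.1f}, "
          f"median B={np.median(condition_b[:, i]):.1f}, U={stat:.0f}, p={p:.3f}")
```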
“…Our work proposes leaving the computational probability estimation/comparison task to the system and calibrating human trust by automatically adapting the decision-making process/interface. This can counter possible human cognitive biases [8,94] and avoid making people directly deal with probabilities. Future work could explore two other directions.…”
Section: Human Perceptions of Self-Confidence and Understanding of AI... (mentioning)
confidence: 99%
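
The idea quoted above, letting the system rather than the user perform the confidence comparison and then adapting the decision-making interface, can be illustrated with a toy rule. Everything in this sketch (the function, the threshold, and the three workflow modes) is hypothetical and not taken from the cited paper.

```python
def choose_workflow(ai_confidence: float, human_confidence: float,
                    margin: float = 0.15) -> str:
    """Pick a decision workflow from two confidence estimates in [0, 1]."""
    if ai_confidence - human_confidence > margin:
        # AI is markedly more confident: surface its recommendation first.
        return "ai_recommends_first"
    if human_confidence - ai_confidence > margin:
        # Human is markedly more confident: let them decide independently.
        return "human_decides_first"
    # Comparable confidence: human decides first, then sees the AI's view,
    # which keeps analytical engagement in the loop.
    return "sequential_deliberation"

print(choose_workflow(0.92, 0.55))  # -> ai_recommends_first
print(choose_workflow(0.60, 0.58))  # -> sequential_deliberation
```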
“…However, recent research has unveiled a potential drawback of providing AI explanations: the risk of increased over-reliance on AI systems when AI provides incorrect suggestions [2,65,84,90]. This phenomenon is attributed to a lack of cognitive engagement with AI explanations, as individuals may opt for quick heuristic judgments, associating explainability with trustworthiness when they lack the motivation or ability for in-depth analysis [3,6].…”
Section: Enhancing Appropriate Reliance in AI-Assisted Decision-Making (mentioning)
confidence: 99%
“…This yielded a necessary sample size of 232 participants. After obtaining institutional IRB approval, we recruited participants from Prolific…”
Section: Participants (mentioning)
confidence: 99%
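
A required-sample-size figure like the 232 quoted above typically comes from an a-priori power analysis. The sketch below shows the general computation for a two-group design; the effect size, alpha, and power are placeholder assumptions, since the excerpt does not report the inputs that produced 232.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Placeholder assumptions: the excerpt does not state the study's values.
effect_size = 0.5   # assumed Cohen's d
alpha = 0.05        # assumed significance level
power = 0.80        # assumed statistical power

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)
print(f"Required per group: {math.ceil(n_per_group)}")
print(f"Total for two groups: {2 * math.ceil(n_per_group)}")
```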