2023
DOI: 10.1145/3579605

Explanations Can Reduce Overreliance on AI Systems During Decision-Making

Abstract: Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people…

Cited by 78 publications (19 citation statements)
References 63 publications
“…At a more fundamental level, our study raises some caution: while developers and researchers are investing in improving AI models and how LLMs can provide context-specific guidance, users may not always perceive these enhancements as substantial improvements in accuracy and relevance. Recent literature has already raised concerns about users' over-reliance on AI systems, such as in the context of AI-based maze-solving tasks [49]. Although the landscape of user behaviors and mental models is more multifaceted with LLMs, our study demonstrates a similar phenomenon of overtrust with LLMs.…”
Section: Transparent and Responsible Interface Design of LLMs
confidence: 53%
“…Designers and researchers who use a chatbot to enact certain important roles for the user, or the user themselves, must be aware of the potential risks of these practices, including inferred identity, breaches of privacy, and how these practices may affect the person and their social environment. For example, individuals may rely on the judgment of the chatbot [53], neglecting their own feelings or the evidence of their surroundings. Therefore, designers of such interventions need to help users fully understand the limitations of the proposed intervention and be aware of potential biases induced by the underlying technology.…”
Section: Summary of Results
confidence: 99%
“…However, these potential benefits come with critical challenges. Previous literature has identified several potential issues, such as accountability, especially when AI-guided decisions lead to adverse outcomes, and the risk of overreliance, which could erode human judgement and decision-making abilities [31][32][33]. In this article, we shift focus to an often-overlooked aspect: the linguistic description of the decision context.…”
Section: The Importance of Language-Based Preferences in Human-Machin…
confidence: 99%