Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3351095.3372852
Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making

Abstract: Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, where the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key to their succe…

Cited by 470 publications (393 citation statements)
References 29 publications
“…To answer these questions, researchers have been advocating for moving beyond defining what constitutes a "good" explanation using model designer's intuition but actually examining how useful an explanation is with human users [24,59]. In responding to this call, there is recently a growing line of literature on empirically evaluating the effectiveness of XAI methods (e.g., [14,16,41,69,71]). Yet, principles required for an explanation to be considered helpful in AI-assisted decision making, arguably, still remain to be articulated and comprehensively assessed.…”
Section: Introduction (mentioning)
confidence: 99%
“…The XSA application automatically provides the scores for sentiments that can be detected on customers' tweets, as well as explanations for the predictions. However, for relying on such predictions, those professionals need to rely on and trust them (Zhang et al, 2020). Therefore, the next sections present the development and evaluation of an XSA application, regarding the two tasks of those professionals in this scenario, which are aligned with our research propositions.…”
Section: Crisis Management Scenario (mentioning)
confidence: 94%
“…They found that showing predictions of an AI model resulted in a 21% accuracy improvement and showing the predictions along with confidence scores resulted in a 46% relative improvement. In a similar study, Zhang et al [42] measured improvement in users' trust when provided with prediction and confidence scores but found no significant improvement in accuracy. The authors attributed the insignificant accuracy gain to the human and AI having little performance divergence on the task.…”
Section: Related Work (mentioning)
confidence: 98%
“…Users are optimistic about the incorporation of AI-assisted decision making in data science [35] and pedagogical tools [1,31] and researchers are interested in the outcome of these interactions [21,35]. Joint human-AI decision making outcomes can be improved if user trust towards the AI is calibrated appropriately [42] and when there is an appropriate level of contribution from each party in a human-AI collaborative context [18,19]. An implicit part of AI-assisted decision making is how the AI system's results are presented to the user.…”
Section: Related Work (mentioning)
confidence: 99%