2022
DOI: 10.48550/arxiv.2209.11812
Preprint

On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Cited by 4 publications (14 citation statements)
References 0 publications
“…Extending it to account for varying AI accuracy would involve a 3-dimensional visual with a third axis on Acc_AI. Finally, we might think of cases where the metric of decision-making performance is not accuracy but, for instance, fairness [5].…”
Section: Discussion
confidence: 99%
“…In order to complement the AI system, the human would have to adhere to its recommendations if and only if these recommendations are correct and override them otherwise. Empirical studies have shown, however, that humans are often not able to achieve this type of appropriate reliance [1,4,5,6]. Instead, we often observe that humans either over- or under-rely on AI recommendations, or simply cannot calibrate their reliance.…”
Section: Introduction
confidence: 95%
“…Empirical evidence supports that explanations aid human cognition in identifying system failures, promoting what is termed calibrated trust [57]. However, a counter-narrative warns against excessive reliance on machine explanations, as users may endorse AI results despite errors [4,61].…”
Section: Introduction
confidence: 99%