2021
DOI: 10.48550/arxiv.2112.11471
Preprint
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

Abstract: As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in numerous domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community to augment human decision making with AI assistance. Besides developing AI technologies for this purpose, the em…

Cited by 43 publications (67 citation statements)
References 73 publications (424 reference statements)
“…Recent work in human-AI decision-making has started to discuss AR in the context of AI advice. Lai et al. [14] give an overview of empirical studies that analyze AI advice considering AR. For example, Chandrasekaran et al. [4] analyze whether humans can learn to predict the model behavior.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…For illustration, we focus on the explainability of AI advice as a design decision. XAI is intensively discussed in research with regard to its impact on human-AI decision-making in general and on AR specifically [1,2,14,29].…”
Section: Illustration of Appropriate Reliance on AI Advice
Citation type: mentioning, confidence: 99%
“…Building on a recent survey [32], we identify 30 papers that: 1) use machine learning models and explanations with the goal of improving human understanding; and 2) conduct empirical human studies to evaluate human understanding with quantitative metrics. Although human-subject experiments can vary in subtle details, the three concepts allow us to organize existing work into congruent categories.…”
Section: Three Core Concepts for Measuring Human Understanding
Citation type: mentioning, confidence: 99%
“…We consider two conditions: 1) whether a person has perfect knowledge of the task and 2) whether machine-predicted labels are revealed. Going through these conditions yields a decision tree that describes different scenarios for human-AI decision-making [32].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
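The two binary conditions quoted above induce a small decision tree with four leaves, one per human-AI decision-making scenario. The Python snippet below is a minimal illustrative sketch (not code from the cited survey) that enumerates those leaves; the condition names are placeholder assumptions, not terminology from [32].

```python
from itertools import product

# Illustrative sketch only: enumerate the four leaves of the decision tree
# implied by two binary conditions. Names are placeholders, not terms from [32].
CONDITIONS = {
    "human_has_perfect_task_knowledge": (True, False),
    "machine_predicted_label_revealed": (True, False),
}

def enumerate_scenarios():
    """Yield one scenario (a dict of condition -> value) per leaf of the tree."""
    names = list(CONDITIONS)
    for values in product(*CONDITIONS.values()):
        yield dict(zip(names, values))

if __name__ == "__main__":
    for i, scenario in enumerate(enumerate_scenarios(), start=1):
        print(f"Scenario {i}: {scenario}")
```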
“…We base our proposal on currently popular AI systems (e.g., decision support, task assistance, recommender systems) and common system features in production and literature. A recent survey paper [32] maps out AI system elements that have been empirically studied for AI decision support in HCI and AI literature, including different types of prediction output, information about the prediction (e.g., local explanations, uncertainty information), information about the model (e.g., performance metrics, documentation, model-wide explanations, training data), and user control features (e.g., customization, feedback to improve the model). Accordingly, we suggest three types of common affordances of AI systems: AI-generated content, transparency, and interaction.…”
Section: Affordances for Trustworthiness Cues: How Is Trustworthiness…
Citation type: mentioning, confidence: 99%
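To make the element taxonomy in the statement above concrete, here is a minimal sketch that groups the listed system elements under the three affordance types it names. The element-to-affordance assignment is an assumption made for illustration; the quoted statement does not spell out this mapping.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Affordance:
    """One affordance type and the AI system elements grouped under it."""
    name: str
    elements: List[str] = field(default_factory=list)

# Illustrative grouping only; the assignment of elements to affordance
# types is an assumption, not taken verbatim from the quoted statement.
AFFORDANCES = [
    Affordance("AI-generated content", ["prediction output"]),
    Affordance("transparency", [
        "local explanations",
        "uncertainty information",
        "performance metrics",
        "documentation",
        "model-wide explanations",
        "training data information",
    ]),
    Affordance("interaction", [
        "customization",
        "feedback to improve the model",
    ]),
]

if __name__ == "__main__":
    for affordance in AFFORDANCES:
        print(f"{affordance.name}: {', '.join(affordance.elements)}")
```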