2023
DOI: 10.48550/arxiv.2302.06503
Preprint

Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making

Abstract: A growing literature on human-AI decision-making investigates strategies for combining human judgment with statistical models to improve decision-making. Research in this area often evaluates proposed improvements to models, interfaces, or workflows by demonstrating improved predictive performance on "ground truth" labels. However, this practice overlooks a key difference between human judgments and model predictions. Whereas humans reason about broader phenomena of interest in a decision, including latent con…

Cited by 1 publication (3 citation statements). References 74 publications (190 reference statements).
“…Instead they are inferred indirectly via proxies: measurements of properties that are observed in the data available to a model. The process of defining proxy variables for a construct of interest necessarily involves making simplifying assumptions, and there is often a considerable conceptual distance between ML proxies and the ways human decision-makers think about the targeted construct (Green and Chen 2021; Guerdan et al. 2023; Jacobs and Wallach 2021; Kawakami et al. 2022). In other words, O_H(X, a) ≠ O_M(X, a).…”
Section: Task Definition (mentioning, confidence: 99%)
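The inequality in this quote is the crux of the proxy-label problem: the outcome the model is trained and scored on, O_M, is not the construct the human reasons about, O_H. Below is a minimal simulation sketch of that gap, not taken from the cited papers; the setup (a latent construct observed only through a selectively recorded proxy) and all names and parameters (o_h, o_m, measured, the thresholds) are illustrative assumptions.

```python
# Sketch: evaluating a decision rule against a proxy label O_M versus the
# latent construct O_H it stands in for. All quantities are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

X = rng.normal(size=n)        # observed feature available to the model
latent = rng.normal(size=n)   # unobserved driver of the construct

# O_H: the construct of interest (e.g., whether the outcome truly occurs).
o_h = (0.8 * X + latent > 0.5).astype(int)

# O_M: the proxy recorded in the data. Selective measurement (recorded more
# often when X is large) makes it diverge systematically from O_H.
measured = rng.random(n) < 0.5 + 0.3 * (X > 0)
o_m = (o_h.astype(bool) & measured).astype(int)

# A trivial decision rule standing in for a trained model.
pred = (X > 0).astype(int)

print(f"accuracy vs proxy O_M:     {(pred == o_m).mean():.3f}")
print(f"accuracy vs construct O_H: {(pred == o_h).mean():.3f}")
```

The two printed accuracies differ, illustrating why "improved predictive performance on ground-truth labels" can misstate performance on the construct whenever O_H(X, a) ≠ O_M(X, a).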
“…Much prior work has studied settings where the ML model outperforms the human decision-maker. These studies are frequently focused on tasks where there are no reasons to expect upfront that the human and the ML model will have complementary strengths (Bansal et al. 2021; Guerdan et al. 2023; Holstein and Aleven 2021; Lurie and Mulligan 2020). For example, some experimental studies employ untrained crowdworkers on tasks that require extensive domain expertise, without which there is no reason to expect that novices would have complementary strengths (Fogliato, Chouldechova, and Lipton 2021; Lurie and Mulligan 2020; Rastogi et al. 2022).…”
Section: Introduction (mentioning, confidence: 99%)