2021
DOI: 10.48550/arxiv.2103.08902
Preprint

Differentiable Learning Under Triage

Abstract: Multiple lines of evidence suggest that predictive models may benefit from algorithmic triage. Under algorithmic triage, a predictive model does not predict all instances but instead defers some of them to human experts. However, the interplay between the prediction accuracy of the model and the human experts under algorithmic triage is not well understood. In this work, we start by formally characterizing under which circumstances a predictive model may benefit from algorithmic triage. In doing so, we also de…
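The deferral mechanism the abstract describes can be illustrated with a minimal confidence-threshold sketch. Everything below (the function name, the threshold, the toy data) is an illustrative assumption; the paper itself studies learning the triage policy, not a fixed threshold rule:

```python
import numpy as np

def triage_predict(model_probs, human_labels, threshold=0.8):
    """Toy triage rule (illustrative, not the paper's method):
    the model predicts where its confidence clears the threshold
    and defers the remaining instances to the human expert.

    model_probs:  (n, k) predicted class probabilities.
    human_labels: (n,) labels the human expert would assign.
    """
    confidence = model_probs.max(axis=1)
    model_preds = model_probs.argmax(axis=1)
    defer = confidence < threshold          # low confidence -> human
    return np.where(defer, human_labels, model_preds)

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
human = np.array([1, 0, 1])
print(triage_predict(probs, human))  # -> [0 0 1] (model, human, model)
```

Under this rule, only the middle instance (confidence 0.55) is deferred; the system's accuracy then depends jointly on the model's accuracy on the kept instances and the human's accuracy on the deferred ones, which is exactly the interplay the paper analyzes.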

Cited by 6 publications (8 citation statements)
References 13 publications

“…However, empirical findings regarding the success and effectiveness of these proposals are mixed (Lai et al., 2021, and references therein). Simultaneously, a growing body of theoretical work has attempted to conceptualize and formalize these hybrid designs (Gao et al., 2021; Bordt and von Luxburg, 2020) and study optimal ways of aggregating human and ML judgments within them (Madras et al., 2018; Mozannar and Sontag, 2020; Wilder et al., 2020; Keswani et al., 2021; Raghu et al., 2019; Okati et al., 2021; Donahue et al., 2022; Steyvers et al., 2022). The existing theories, however, are hard to navigate and make sense of as a whole.…”
Section: Introduction (mentioning, confidence: 99%)
“…Finally, Okati et al. (2021) propose a method that iteratively optimizes the classifier on points where it outperforms the human on the training sample, and then learns a post-hoc rejector to predict who, between the human and the AI, has higher error on each point. The setting where the cost of deferral is constant has a long history in machine learning and goes by the name of rejection learning (Cortes et al., 2016; Chow, 1970; Bartlett and Wegkamp, 2008; Charoenphakdee et al., 2021) or selective classification (only predict on x% of the data) (El-Yaniv and Wiener, 2010; Geifman and El-Yaniv, 2017; Gangrade et al., 2021; Acar et al., 2020).…”
Section: Related Work (mentioning, confidence: 99%)
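The iterative scheme attributed to Okati et al. (2021) in the excerpt above can be sketched in toy form. Everything here is an illustrative assumption, not the paper's actual algorithm: the least-squares "classifier", the simulated human expert, and the linear rejector are stand-ins chosen to keep the loop self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
# simulated human: accurate on the left half-plane, random elsewhere
human = np.where(X[:, 0] < 0, y, rng.integers(0, 2, len(y)))

def fit_linear(X, y):
    # least-squares fit on +/-1 targets, used as a toy classifier
    return np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

w = fit_linear(X, y)
for _ in range(3):
    model = (X @ w > 0).astype(int)
    # keep points where the model is at least as good as the human,
    # then retrain the classifier on only those points
    keep = (model == y) | (human != y)
    w = fit_linear(X[keep], y[keep])

# post-hoc rejector: a second model trained to predict, per point,
# whether the human currently beats the classifier
model = (X @ w > 0).astype(int)
target = ((human == y) & (model != y)).astype(int)
v = fit_linear(X, target)
defer = X @ v > 0
system = np.where(defer, human, model)
print("model acc:", (model == y).mean(), "system acc:", (system == y).mean())
```

The design point the excerpt highlights is the separation of concerns: the classifier is specialized to the region it handles well, and deferral is decided afterwards by a separately learned rejector rather than jointly with the classifier.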
“…Failure of Prior Approaches. Existing literature has focused on surrogate loss functions for deferral (Madras et al., 2018; Mozannar and Sontag, 2020; Verma and Nalisnick, 2022) and confidence-based approaches (Raghu et al., 2019; Okati et al., 2021). We give a simple synthetic setting where all of these approaches fail to find a classifier/rejector pair with low system error.…”
Section: Introduction (mentioning, confidence: 99%)
“…In crowd-sourcing, classification models have been used to automatically filter examples to improve human annotation efficiency [17,38,2,5]. A similar line of research focuses on algorithmic deferral techniques where a model defers to human predictions based on the model's confidence [39,40], as well as work on adapting prediction models to the human decision maker [41,42,43,44]. The results in [41] in particular describe experiments with the same CIFAR-10H dataset that we use in this paper.…”
Section: Ensembles and Opinion Pools (mentioning, confidence: 99%)