2021
DOI: 10.48550/arxiv.2109.05389
Preprint

Omnipredictors

Parikshit Gopalan,
Adam Tauman Kalai,
Omer Reingold
et al.

Abstract: Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. Alternatively, the same clas…

Cited by 1 publication (1 citation statement)
References 12 publications
“…[Corbett-Davies et al., 2017, Canetti et al., 2019] study the power of post-processing calibrated scores into decisions when the objective is to equalize certain statistical fairness notions across subgroups, which is not our objective. More related to our work are results in [Natarajan et al., 2015, Dembczyński et al., 2017] showing that for accuracy metrics including F-scores and AUC, applying the optimal post-processing transformation to the ℓ2-loss minimizing predictor in a class H yields a classifier competitive with the best classifier in the class of binary classifiers derived by thresholding models from H. Recently, [Gopalan et al., 2021] proposed the notion of an omnipredictor, which takes this idea one step further and seeks a single predictor that can be post-processed to be competitive with a large collection of loss functions and w.r.t. arbitrary classes of functions. Their work leverages connections to the notion of multicalibration to prove that this objective is computationally feasible: there exists a predictor p with the guarantee that for every convex loss function, applying the optimal post-processing transformation for p to p yields an optimal classifier for this loss, albeit with a degradation in performance that depends on the Lipschitzness of the loss function in question.…”
Section: Further Related Work
Citation type: mentioning
Confidence: 52%
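As a reading aid, the omnipredictor guarantee described in the citation statement above can be written out as a worked definition. This is a sketch under assumed notation: the loss class L, comparison class C, post-processing maps k_ℓ, and slack ε_ℓ are labels introduced here for illustration, not taken from the snippet or the paper verbatim.

\[
\forall\, \ell \in \mathcal{L}:\quad
\mathbb{E}_{(x,y)\sim \mathcal{D}}\!\left[ \ell\big(k_\ell(\tilde{p}(x)),\, y\big) \right]
\;\le\;
\min_{c \in \mathcal{C}} \mathbb{E}_{(x,y)\sim \mathcal{D}}\!\left[ \ell\big(c(x),\, y\big) \right] + \varepsilon_\ell ,
\]

where \(\tilde{p}\) is the single (multicalibrated) predictor, \(k_\ell\) is the optimal post-processing transformation for the loss \(\ell\), and the slack \(\varepsilon_\ell\) grows with the Lipschitz constant of \(\ell\), matching the degradation in performance noted in the statement above.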