2021
DOI: 10.48550/arxiv.2101.01739
Preprint
Online Multivalid Learning: Means, Moments, and Prediction Intervals

Abstract: We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples (x, y). This means that the resulting estimates correctly predict various statistics of the labels y not just marginally, as averaged over the sequence of examples, but also conditionally on x ∈ G for any G belonging to an arbitrary intersecting collection of groups G. We provide three instantiations of this framework. The first is me…
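The multivalidity property in the abstract can be illustrated with a small check: a mean predictor is multivalid (in the weakest, mean-consistency sense) if its average residual is near zero not just overall but on every group G in the collection. The sketch below is illustrative only, on hypothetical toy data; the function name and the group definitions are our own, not from the paper.

```python
import numpy as np

def group_conditional_errors(x, y, preds, groups):
    """For each group G (given as a membership predicate on x), compute the
    mean residual y - preds restricted to examples with x in G.
    Mean-multivalidity asks that each of these be close to zero,
    not just the marginal average over the whole sequence."""
    errors = {}
    for name, member in groups.items():
        mask = np.array([member(v) for v in x])
        if mask.any():
            errors[name] = float(np.mean(y[mask] - preds[mask]))
    return errors

# Toy example with overlapping groups (hypothetical data and predictor).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)
y = x + rng.normal(0.0, 0.1, size=1000)
preds = x  # a predictor that happens to be well calibrated on this toy data
groups = {
    "all": lambda v: True,          # marginal validity
    "low_x": lambda v: v < 0.5,     # conditional validity on a subgroup
    "high_x": lambda v: v >= 0.5,   # groups may intersect in general
}
errs = group_conditional_errors(x, y, preds, groups)
```

Note this only checks a given predictor after the fact; the paper's contribution is an online algorithm that guarantees such group-conditional statistics against adversarial sequences, which this sketch does not implement.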

Cited by 8 publications (11 citation statements)
References 15 publications
“…A notable exception to the rule that fairness and accuracy must involve tradeoffs, from which we take inspiration, is the literature on multicalibration initiated by Hébert-Johnson et al. [Hébert-Johnson et al., 2018, Kim et al., 2019, Gupta et al., 2021, Dwork et al., 2021] that asks that a model's predictions be calibrated not just overall, but also when restricted to a large number of protected subgroups g. Hébert-Johnson et al. [Hébert-Johnson et al., 2018] and Kim, Ghorbani, and Zou [Kim et al., 2019] show that an arbitrary model f can be postprocessed to satisfy multicalibration (and the related notion of "multi-accuracy") without sacrificing (much) in terms of model accuracy. Our aim is to achieve something similar, but for predictive error, rather than model calibration.…”
Section: Related Work
confidence: 99%
“…Remark 6. We have chosen to define approximate Bayes optimality by letting the approximation term scale proportionately to the inverse probability of the group g, similar to how notions of multigroup fairness are defined in [Kearns et al, 2018, Gupta et al, 2021]. An alternative (slightly weaker) option would be to require error that is uniformly bounded by for all groups, but to only make promises for groups g that have probability µ g larger than some threshold, as is done in [Hébert-Johnson et al, 2018].…”
Section: Preliminaries
confidence: 99%
“…In particular, Hébert-Johnson et al. introduced the method for constructing efficient multi-calibrated predictors in the batch setting [31]. Multi-calibration in the online setting is implicit in [52]; [37] extends the notion of multi-calibration to higher moments and provides constructions; and [28] provides an efficient online solution. Multi-calibration has also been applied to solve several flavors of problems: fair ranking [13]; omniprediction, which is roughly learning a predictor that, for a given class of loss functions, can be post-processed to minimize loss for any function in the class [23]; and providing an alternative to propensity scoring for the purposes of generalization to future populations [39].…”
Section: Additional Related Work
confidence: 99%
“…because the Bayes optimal predictor always satisfies multi-group fairness (regardless of G). Notable examples include multi-calibration [Hébert-Johnson et al., 2018] and multi-accuracy, with subsequent works studying extensions in ranking [Dwork et al., 2019], regression [Jung et al., 2020] and online learning [Gupta et al., 2021]. As discussed above, Blum and Lykouris [Blum and Lykouris, 2019] study multi-group agnostic PAC learning (which, depending on the loss function, is often similarly aligned with accuracy) in the online setting.…”
Section: Related Work
confidence: 99%