2021 · Preprint
DOI: 10.48550/arxiv.2111.03135
Scaffolding Sets

Abstract: Predictors map individual instances in a population to the interval [0, 1]. For a collection C of subsets of a population, a predictor is multi-calibrated with respect to C if it is simultaneously calibrated on each set in C. We initiate the study of the construction of scaffolding sets, a small collection S of sets with the property that multi-calibration with respect to S ensures correctness, and not just calibration, of the predictor. Our approach is inspired by the folk wisdom that the intermediate layers …
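For concreteness, calibration on a set can be written as follows (a standard formulation, not quoted from the paper; p denotes the predictor, y the true outcome, and α an error tolerance):

    \mathbb{E}\left[\, y \mid p(x) = v,\ x \in S \,\right] \approx v \quad \text{for every } v \in \mathrm{range}(p),

and p is multi-calibrated with respect to C if this holds simultaneously for every S \in C, typically up to tolerance α.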

Cited by 4 publications (8 citation statements). References 38 publications. Citing publications appeared in 2022 and 2023.

Citation statements (ordered by relevance):
“…Self-Supervised Contrastive Learning Another closely related line of research is self-supervised contrastive learning (SSCL) on unimodal data. Representation learning has been crucial in modern machine learning (Bengio et al, 2013; Burhanpurkar et al, 2021; Zhang et al, 2021; Deng et al, 2021; Kawaguchi et al, 2022). SSCL is a family of self-supervised learning algorithms that learn representations by contrasting two views of the same input generated by data augmentation.…”
Section: Related Work (mentioning; confidence: 99%)
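As a rough illustration of the contrastive objective this statement describes (a minimal sketch under common SSCL conventions, not code from any cited paper; the function name info_nce_loss and the NumPy/SciPy setup are assumptions):

    import numpy as np
    from scipy.special import logsumexp

    def info_nce_loss(z1, z2, temperature=0.5):
        # z1[i] and z2[i] are embeddings of two augmented views of example i.
        # Normalize so that dot products are cosine similarities.
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
        logits = (z1 @ z2.T) / temperature  # (n, n) pairwise similarity matrix
        # Diagonal entries are positive pairs; the rest of each row serves as negatives.
        log_softmax = logits - logsumexp(logits, axis=1, keepdims=True)
        return -np.mean(np.diag(log_softmax))

Each view pair is pulled together while all other pairings in the batch are pushed apart; the temperature controls how sharply the softmax concentrates on the hardest negatives.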
“…Roughly concurrently, work on 'multi-calibration' has suggested explicitly taking into account input information for selecting groups: the original paper on multi-calibration (Hébert-Johnson et al, 2018) proposed to look at all computable groups, a proposal that has also been made in the sequence setting by Dawid (1985). But even approaches based on similarity of learned representations either prescribe that the selected groups have equal predictions (Burhanpurkar et al, 2021) or apply additional binning in prediction space (Luo et al, 2022). Approaches proposing to omit this additional binning are sometimes referred to as 'multi-accuracy' (Hébert-Johnson et al, 2018; Kim, Ghorbani, and Zou, 2019), but we are not aware of a thorough discussion of this choice.…”
Section: Conventional Understanding of Calibration (mentioning; confidence: 99%)
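To make the contrast referenced above explicit (standard definitions, not quoted from the statement; p is the predictor, y the outcome, α a tolerance):

    \text{multi-accuracy:}\quad \big|\mathbb{E}[\, p(x) - y \mid x \in S \,]\big| \le \alpha \quad \text{for all } S \in \mathcal{C};

    \text{multi-calibration:}\quad \big|\mathbb{E}[\, p(x) - y \mid x \in S,\ p(x) = v \,]\big| \le \alpha \quad \text{for all } S \in \mathcal{C} \text{ and all } v.

The second condition adds the binning in prediction space (conditioning on p(x) = v) whose omission the quoted passage discusses.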
“…One alternative, suggested in (Burhanpurkar et al, 2021), would be to consider all efficiently computable groups.…”
Section: Grouping Choices (mentioning; confidence: 99%)
“…Beyond fairness, multicalibration and OI also provide strong accuracy guarantees [see, e.g., Blum and Lykouris, 2020; Zhao, Kim, Sahoo, Ma, and Ermon, 2021; Gopalan, Kalai, Reingold, Sharan, and Wieder, 2022; Kim, Kern, Goldwasser, Kreuter, and Reingold, 2022; Burhanpurkar, Deng, Dwork, and Zhang, 2021]. For a general predictor class P and a subpopulation class C, Shabat, Cohen, and Mansour [2020] showed sample complexity upper bounds of uniform convergence for multicalibration based on the maximum of suitable complexity measures of C and P. They complemented this result with a lower bound which does not grow with C and P. In comparison, we focus on the weaker no-access OI setting, where the sample complexity can be much smaller, and we provide matching upper and lower bounds in terms of the dependence on D and P.…”
Section: Related Work (mentioning; confidence: 99%)