2020
DOI: 10.48550/arxiv.2008.09643
Preprint

Privacy Preserving Recalibration under Domain Shift

Rachel Luo, Shengjia Zhao, Jiaming Song, et al.

Abstract: Classifiers deployed in high-stakes real-world applications must output calibrated confidence scores, i.e., their predicted probabilities should reflect empirical frequencies. Recalibration algorithms can greatly improve a model's probability estimates; however, existing algorithms are not applicable in real-world situations where the test data follows a different distribution from the training data, and privacy preservation is paramount (e.g., protecting patient records). We introduce a framework that abstracts…
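The abstract's definition of calibration (predicted probabilities matching empirical frequencies) is commonly quantified by the Expected Calibration Error (ECE). As background only, not taken from the paper, here is a minimal NumPy sketch; the bin count and the toy overconfident classifier are illustrative assumptions:

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average gap between a model's confidence and its empirical
    accuracy, weighted by how many predictions fall in each bin.
    A perfectly calibrated model has ECE = 0."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Right-inclusive bins so a confidence of exactly 1.0 is counted.
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        avg_conf = confidences[mask].mean()  # what the model predicted
        avg_acc = correct[mask].mean()       # what actually happened
        ece += (mask.sum() / len(confidences)) * abs(avg_conf - avg_acc)
    return ece

# Toy example (assumed data): an overconfident classifier whose stated
# confidence (~0.85 on average) exceeds its true accuracy (~0.6).
rng = np.random.default_rng(0)
conf = rng.uniform(0.7, 1.0, size=1000)
correct = (rng.uniform(size=1000) < 0.6).astype(float)
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")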

Cited by 1 publication (1 citation statement)
References 34 publications (44 reference statements)
“…Our work is not the first to study calibration under learning with DP, but we provide a more comprehensive characterization of privacy-calibration tradeoffs, along with solutions that improve this tradeoff and are both simpler and more effective. Luo et al. [24] studied private calibration for out-of-domain settings, but did not study whether DP-SGD causes miscalibration in-domain. Angelopoulos et al. [25] modified split conformal prediction to be privacy-preserving, but they only studied vision models, and their private models suffer a substantial performance decrease compared to non-private ones.…”
Section: Related Work
confidence: 99%
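For background on what a recalibration algorithm does, below is a minimal sketch of temperature scaling, the standard non-private baseline in this literature. It is not the privacy-preserving, domain-shift method the cited paper proposes; the optimizer bounds and the val_logits/val_labels names are illustrative assumptions (requires SciPy):

import numpy as np
from scipy.optimize import minimize_scalar

def temperature_scale(logits, labels):
    """Fit a single temperature T > 0 on held-out data by minimizing
    negative log-likelihood; softmax(logits / T) is then the
    recalibrated output. Standard non-private baseline, not the
    paper's method."""
    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return res.x

# Hypothetical usage on held-out data:
# T = temperature_scale(val_logits, val_labels)
# calibrated test probabilities = softmax(test_logits / T)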