2022
DOI: 10.48550/arXiv.2201.12440
Preprint

Certifying Model Accuracy under Distribution Shifts

Abstract: Certified robustness in machine learning has primarily focused on adversarial perturbations of the input with a fixed attack budget for each point in the data distribution. In this work, we present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution. We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation. Our framework allows the datu…
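
A minimal sketch of the randomization idea described in the abstract: classify by majority vote over random transformations of the input (here, random rotations), so that the smoothed model's predictions vary slowly under distribution shifts generated by that transformation. The `model` callable, the rotation range, and the voting scheme are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import rotate

def smoothed_predict(model, image, num_samples=100, max_angle=30.0, num_classes=10):
    """Majority vote of `model` over random rotations of `image`.

    `model` is assumed to map a 2-D image array to a class index.
    """
    rng = np.random.default_rng(0)
    votes = np.zeros(num_classes)
    for _ in range(num_samples):
        angle = rng.uniform(-max_angle, max_angle)
        # Rotate within the transformation space (here, in-plane rotations).
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        votes[model(rotated)] += 1
    return int(np.argmax(votes))  # class with the most votes over rotations
```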

Cited by 2 publications (8 citation statements)
References 10 publications
“…Analytical works in these areas [6,5,47,61,49,78,36,10] have shown that a model's generalization performance remains high under distribution shifts when the training (source) and test (target) distributions are close under certain divergence measures. Distributional divergence measures studied in previous works include the Wasserstein distance [62,40,39,26], maximum mean discrepancy [63], f-divergence [7,70], and H-divergence [6,1]. Since generalization to arbitrary domains is not possible, previous works make additional assumptions on the unseen domains: for example, [9] assumes that the source and target distributions are derived from the same hyper-distribution, [1,38] assume that the target distributions belong to the convex hull of the source distribution(s), and [40] considers shifts generated by different parameterized transformations.…”
Section: Domain Generalization and Domain Adaptation (mentioning)
confidence: 99%
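
The statement above lists the Wasserstein distance among the divergences used to compare source and target distributions. A minimal sketch for the one-dimensional case, using SciPy's empirical W1 distance; the Gaussian source and target samples are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=1000)  # training distribution
target = rng.normal(loc=0.5, scale=1.0, size=1000)  # shifted test distribution

# W1 between the two empirical distributions; for a pure mean shift of a
# 1-D Gaussian this is approximately the size of the shift (here, ~0.5).
print(wasserstein_distance(source, target))
```
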
“…Distributional divergence measures studied in previous works include the Wasserstein distance [62,40,39,26], maximum mean discrepancy [63], f-divergence [7,70], and H-divergence [6,1]. Since generalization to arbitrary domains is not possible, previous works make additional assumptions on the unseen domains: for example, [9] assumes that the source and target distributions are derived from the same hyper-distribution, [1,38] assume that the target distributions belong to the convex hull of the source distribution(s), and [40] considers shifts generated by different parameterized transformations. Another line of work considers learning a representation space by minimizing different divergence measures between the source distributions [1,73,25,79,55,30].…”
Section: Domain Generalization and Domain Adaptation (mentioning)
confidence: 99%
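
The last sentence above mentions learning a representation space by minimizing divergence measures between distributions; maximum mean discrepancy (MMD), cited earlier in the statement, is one such measure. A minimal sketch of a (biased) RBF-kernel estimator of the squared MMD between two sample sets; the sample data and bandwidth are illustrative assumptions.

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased estimator of the squared MMD between samples X and Y
    under the RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel matrix.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))  # source features
Y = rng.normal(0.5, 1.0, size=(500, 2))  # shifted target features
print(mmd_rbf(X, Y))  # larger value => distributions are further apart
```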