2022
DOI: 10.48550/arxiv.2209.08436
Preprint

Estimating and Explaining Model Performance When Both Covariates and Labels Shift

Abstract: Deployed machine learning (ML) models often encounter new user data that differs from their training data. Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which cons…
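Reading note (not part of the indexed abstract): assuming the definition given in the full text of Chen et al. (2022), the SJS condition can be sketched as

\[
p_{\text{target}}\!\left(x_{\bar S} \mid x_S, y\right) \;=\; p_{\text{source}}\!\left(x_{\bar S} \mid x_S, y\right),
\]

where S is a small ("sparse") subset of the features, x_S denotes those features, x_{\bar S} the remaining ones, and y the label; the joint marginal p(x_S, y) is otherwise allowed to shift freely between source and target. In the special case S = ∅, this reduces to prior probability (label) shift, where only p(y) changes and p(x | y) is preserved.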

Cited by 1 publication (22 citation statements). References 15 publications.
“…Assumption 3.1, complemented by Assumption 3.7 below, can be reconciled with the setting of Chen et al (2022) in the following way:…”
Section: Setting (mentioning, confidence: 99%)
“…The notion of Sparse Joint Shift (SJS) was introduced by Chen et al (2022) as a tractable model of dataset shift "which considers the joint shift of both labels and a few features". In this paper, we reanalyse the notion in some depth, looking closer at its connection to prior probability shift and the link between SJS and covariate shift.…”
Section: Introduction (mentioning, confidence: 99%)