2022
DOI: 10.48550/arxiv.2203.15885
Preprint

Split Conformal Prediction for Dependent Data

Abstract: Split conformal prediction is a popular tool to obtain predictive intervals from general statistical algorithms, with few assumptions beyond data exchangeability. We show that coverage guarantees from split CP can be extended to dependent processes, such as the class of stationary β-mixing processes, by adding a small coverage penalty. In particular, we show that the empirical coverage bounds for some β-mixing processes match the order of the bounds under exchangeability. The framework introduced also extends …
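
For orientation, the split conformal procedure the abstract refers to can be sketched in a few lines. The following is a minimal sketch, assuming a regression setting with absolute-residual nonconformity scores and an sklearn-style point predictor; the function name, split choice, and score are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X, y, X_test, alpha=0.1, seed=0):
    """(lower, upper) predictive intervals with ~(1 - alpha) coverage
    under exchangeability; for beta-mixing data the paper shows the
    guarantee degrades only by a small coverage penalty."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, calib = idx[: len(y) // 2], idx[len(y) // 2 :]

    # 1. Fit any point predictor on the proper training split.
    model = LinearRegression().fit(X[train], y[train])

    # 2. Nonconformity scores on the calibration split: |y - y_hat|.
    scores = np.abs(y[calib] - model.predict(X[calib]))

    # 3. Conformal quantile: the ceil((m + 1)(1 - alpha))-th smallest score.
    m = len(scores)
    k = min(int(np.ceil((m + 1) * (1 - alpha))), m)
    q_hat = np.sort(scores)[k - 1]

    # 4. Symmetric band around the point predictions.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

Under exchangeability this interval covers a new response with probability at least 1 − α; the paper's contribution is that for stationary β-mixing data the same construction retains coverage up to a small additive penalty.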

Cited by 1 publication (2 citation statements) · References 18 publications (62 reference statements)
“…Generally speaking, examples are assumed to be exchangeable in a CP context. Most pertinent to our work, (Gendler et al 2022;Tibshirani et al 2019;Fisch et al 2021;Cauchois et al 2020;Gibbs and Candès 2021;Oliveira et al 2022) all consider various situations in which the exchangeability of the examples is violated to some extent. (Gendler et al 2022) considers the case in which the test examples may be adversarially attacked (Szegedy et al 2014;Goodfellow, Shlens, and Szegedy 2015;Madry et al 2018); (Tibshirani et al 2019) investigates the situation in which the density ratio between the target domain and the source domain is known; (Fisch et al 2021) studies the few-shot learning setting and assumes that the source domains and the target domain are independent and identically distributed (i.i.d.)…”
Section: Introduction
Mentioning confidence: 99%
“…(Gendler et al 2022) considers the case in which the test examples may be adversarially attacked (Szegedy et al 2014;Goodfellow, Shlens, and Szegedy 2015;Madry et al 2018); (Tibshirani et al 2019) investigates the situation in which the density ratio between the target domain and the source domain is known; (Fisch et al 2021) studies the few-shot learning setting and assumes that the source domains and the target domain are independent and identically distributed (i.i.d.) from some distribution on the domains; (Gibbs and Candès 2021) considers an online learning setting and (Oliveira et al 2022) provides results when the examples are mixing (Achim 2013;Xiaohong, Lars Peter, and Marine 2010;Bin 1994). Different from all the works discussed above, we consider the OOD generalization setting in which the f -divergence between the target domain and the convex hull of the source domains is constrained.…”
Section: Introduction
Mentioning confidence: 99%
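
One of the relaxations of exchangeability listed in the quote above is concrete enough to sketch: Tibshirani et al (2019) reweight the calibration scores by the known density ratio w(x) between the target and source domains. Below is a hedged sketch of that weighted quantile step, assuming numpy arrays of calibration scores and ratios; the names are illustrative and details should be checked against the cited paper.

```python
import numpy as np

def weighted_conformal_quantile(scores, w_calib, w_test, alpha=0.1):
    """Smallest score q such that the w-weighted mass of {s_i <= q},
    normalized to include the test point's weight, reaches 1 - alpha."""
    order = np.argsort(scores)
    s, w = scores[order], w_calib[order]
    total = w.sum() + w_test
    # Cumulative normalized weights over the sorted calibration scores;
    # the test point's weight is conventionally placed at +infinity.
    cum = np.cumsum(w) / total
    hit = np.nonzero(cum >= 1 - alpha)[0]
    # If the (1 - alpha) level is never reached, the interval is unbounded.
    return s[hit[0]] if hit.size else np.inf
```

With w ≡ 1 this reduces to the ordinary split conformal quantile, which is one way to sanity-check the sketch.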