2013
DOI: 10.1007/s10994-013-5355-6

Conditional validity of inductive conformal predictors

Abstract: Conformal predictors are set predictors that are automatically valid in the sense of having coverage probability equal to or exceeding a given confidence level. Inductive conformal predictors are a computationally efficient version of conformal predictors satisfying the same property of validity. However, inductive conformal predictors have been only known to control unconditional coverage probability. This paper explores various versions of conditional validity and various ways to achieve them using inductive…

Cited by 159 publications (218 citation statements). References 16 publications.
“…Conformal predictors are unconditional by default, i.e., while the probability of making an error for an arbitrary test pattern is ε, it is possible that errors are distributed unevenly amongst different natural subgroups in the test data, e.g., test patterns with different class labels [7,11,13]. If the output of a test pattern is easily predicted, e.g., because it belongs to the majority class, the probability of making an erroneous prediction on that test pattern might be lower than ε, while the opposite might be true for difficult test patterns, e.g., those belonging to the minority class.…”
Section: Conformal Classifier Errors (mentioning; confidence: 99%)
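The unconditional guarantee the excerpt refers to can be sketched with the standard inductive conformal p-value. The function names and the toy scores below are illustrative assumptions, not taken from the cited papers; only the p-value formula itself is the standard one.

```python
def icp_p_value(cal_scores, test_score):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as large as the test score,
    counting the test example itself (the +1 in both terms)."""
    return (sum(s >= test_score for s in cal_scores) + 1) / (len(cal_scores) + 1)

def prediction_set(cal_scores, scores_per_label, epsilon):
    """Labels whose p-value exceeds epsilon form the set prediction.
    Under exchangeability the true label is excluded with probability
    at most epsilon, but only marginally over all test patterns."""
    return {y for y, s in scores_per_label.items()
            if icp_p_value(cal_scores, s) > epsilon}

# Toy example (illustrative numbers):
cal = [0.1, 0.5, 0.7, 0.9]
print(icp_p_value(cal, 0.6))                      # (2 + 1) / (4 + 1) = 0.6
print(prediction_set(cal, {0: 0.2, 1: 0.95}, 0.2))
```

Because the guarantee is marginal, an "easy" label can see fewer than an ε-fraction of errors while a "hard" label sees more, which is exactly the imbalance the excerpt describes.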
“…Conditional (or Mondrian) conformal classifiers [11,13] effectively let us fix ε₀ and ε₁ such that ε = ε₀ = ε₁ by making the p-values conditional on the class labels of the calibration examples and test patterns. This is accomplished by slightly modifying the p-value equation, so that only calibration examples that share output labels with the test pattern (which is tentatively labeled as ỹ) are considered, i.e.,…”
Section: Class-conditional Conformal Classification (mentioning; confidence: 99%)
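The modification described in the excerpt, restricting the comparison to calibration examples sharing the tentative label ỹ, can be sketched as follows. The function name and toy data are assumptions for illustration; the restriction to same-label calibration scores is the class-conditional (Mondrian) construction itself.

```python
def mondrian_p_value(cal_scores, cal_labels, test_score, y_tilde):
    """Class-conditional (Mondrian) p-value: compare the test score
    only against calibration examples whose label equals the
    tentative test label y_tilde, so each class gets its own
    error-rate guarantee."""
    same = [s for s, y in zip(cal_scores, cal_labels) if y == y_tilde]
    return (sum(s >= test_score for s in same) + 1) / (len(same) + 1)

# Toy example: four calibration examples, two per class.
scores = [0.1, 0.5, 0.7, 0.9]
labels = [0, 0, 1, 1]
print(mondrian_p_value(scores, labels, 0.6, y_tilde=0))  # vs [0.1, 0.5] -> 1/3
print(mondrian_p_value(scores, labels, 0.6, y_tilde=1))  # vs [0.7, 0.9] -> 1.0
```

Since each class is calibrated on its own subset, the error rate is controlled at ε within each class separately, rather than only on average, at the cost of smaller effective calibration sets per class.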
“…Inductive conformal prediction is introduced in the book [6], and is further developed and analyzed in [18]. A new working paper [19] introduces the method of cross-conformal prediction, which is a hybrid of inductive conformal prediction and cross-validation.…”
Section: Related Work (mentioning; confidence: 99%)
“…Not all of these working papers have been published in scientific journals or proceedings. ICP is introduced in the book [1], and is further developed and analyzed in [8].…”
Section: Related Work (mentioning; confidence: 99%)