2016
DOI: 10.1007/978-3-319-31753-3_7
Reliable Confidence Predictions Using Conformal Prediction

Abstract: Conformal classifiers output confidence prediction regions, i.e., multi-valued predictions that are guaranteed to contain the true output value of each test pattern with some predefined probability. In order to fully utilize the predictions provided by a conformal classifier, it is essential that those predictions are reliable, i.e., that a user is able to assess the quality of the predictions made. Although conformal classifiers are statistically valid by default, the error probability of the predic…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
1
1

Citation Types

0
13
0

Year Published

2018
2018
2023
2023

Publication Types

Select...
3
2

Relationship

1
4

Authors

Journals

Cited by 9 publications (13 citation statements) · References 12 publications
“…In this paper, we further refine the work presented in [6], and propose a more flexible method of producing an intuitive interpretation of the predictions produced by a conformal classifier. We remove the dependency on ε, and replace it with a new parameter, k, that denotes the maximum expected number of errors that we wish the classifier to make on the test set.…”
Section: Introduction
confidence: 90%
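The role of k can be illustrated with a short sketch. Under the standard conformal guarantee that each prediction region is erroneous with probability at most ε, the expected number of errors over n test patterns is ε·n, so a maximum expected error count of k corresponds to choosing ε = k/n. This derivation and the function below are an assumed reading of the statement, not the cited paper's actual method:

```python
# A minimal sketch (assumption, not the cited paper's method): mapping a
# maximum expected number of errors k on a test set of size n to a
# conformal significance level. A valid conformal classifier errs with
# probability epsilon per prediction, so the expected number of errors
# over n predictions is epsilon * n; choosing epsilon = k / n bounds
# that expectation by k.

def significance_from_expected_errors(k: float, n_test: int) -> float:
    """Return epsilon such that the expected number of erroneous
    prediction regions on n_test examples is at most k."""
    if n_test <= 0:
        raise ValueError("n_test must be positive")
    return k / n_test

# Example: allowing at most 5 expected errors on 1000 test patterns
# corresponds to epsilon = 0.005, i.e., 99.5% confidence regions.
print(significance_from_expected_errors(5, 1000))  # 0.005
```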
“…However, while conformal predictors are able to supply users with an appropriate estimate of error probability, the validity of a conformal classifier holds only a priori, i.e., before the prediction is made. After observing a particular prediction, it is no longer automatically correct to interpret ε as a well-calibrated error probability for any particular prediction, which leads to conformal classifiers instead requiring predictions to be interpreted in a manner that is potentially counter-intuitive to a user less familiar with p-value statistics [6]. Specifically, some prediction regions are always guaranteed to be correct (because they contain all possible labels) whereas others are always guaranteed to be incorrect (because they contain no class labels); since the overall error rate is asymptotically ε, this leads to the more interesting prediction regions (containing, e.g., only a single class label) potentially having an error rate that is not immediately related to ε.…”
Section: Introduction
confidence: 99%
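The behavior described in this excerpt follows from how conformal prediction regions are standardly formed: a class label is included in the region whenever its p-value exceeds the significance level ε, so a region can contain every label (always correct) or no labels (always erroneous). A minimal sketch with hypothetical p-values:

```python
# A minimal sketch of standard conformal prediction regions: a class
# label enters the region whenever its p-value exceeds the significance
# level epsilon. The p-values below are hypothetical.

def prediction_region(p_values: dict[str, float], epsilon: float) -> set[str]:
    """Include every label whose conformal p-value exceeds epsilon."""
    return {label for label, p in p_values.items() if p > epsilon}

p = {"a": 0.42, "b": 0.03, "c": 0.008}
print(prediction_region(p, 0.05))   # {'a'}           -- singleton region
print(prediction_region(p, 0.005))  # {'a', 'b', 'c'} -- always correct
print(prediction_region(p, 0.50))   # set()           -- always erroneous
```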
“…2), the error rate is plotted versus the significance level and the model is considered valid (well-calibrated) if the result is a straight diagonal line, i.e., we obtain the error rate we ask for when making predictions. Deviations from the expected error rates can mainly be attributed to either a lack of exchangeability for the set of predicted compounds, or statistical fluctuations due to small sets of predicted compounds, 14,15 since conformal prediction will provide valid predictions “over time”, i.e., given enough predictions (law of large numbers). 12…”
Section: Validity
confidence: 99%
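The calibration plot described here can be sketched as follows: for a valid conformal classifier, the p-value of the true label is (approximately) uniformly distributed on [0, 1], so the empirical error rate tracks the diagonal. The p-values below are simulated stand-ins, not data from the cited study:

```python
# A minimal sketch (simulated data) of the calibration plot: empirical
# error rate versus significance level, compared with the diagonal.
import numpy as np
import matplotlib.pyplot as plt

def empirical_error_rate(true_label_p: np.ndarray, epsilon: float) -> float:
    """Fraction of examples whose true-label p-value is <= epsilon,
    i.e., whose prediction region excludes the true label."""
    return float(np.mean(true_label_p <= epsilon))

# For an exactly valid conformal classifier the true-label p-values are
# (approximately) uniform on [0, 1]; we simulate that case here.
rng = np.random.default_rng(0)
true_label_p = rng.uniform(size=1000)

grid = np.linspace(0.0, 1.0, 101)
errors = [empirical_error_rate(true_label_p, e) for e in grid]

plt.plot(grid, errors, label="empirical error rate")
plt.plot(grid, grid, "--", label="exact validity (diagonal)")
plt.xlabel(r"significance level $\epsilon$")
plt.ylabel("error rate")
plt.legend()
plt.show()
```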
“…To compute ε̂, we follow the approach of [16]. Given a significance level ε, let P(e), P(s) and P(d) be, respectively, the fractions of empty, single and double prediction regions observed on a test set with K examples (P(e) + P(s) + P(d) = 1).…”
Section: Acceptance Criterion
confidence: 99%
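For a two-class problem, these fractions can be computed directly from the observed prediction regions. The helper below is an illustrative sketch under that binary assumption (names and example regions are hypothetical):

```python
# A minimal sketch (hypothetical names and data) of the observed region
# fractions for a binary problem: P(e), P(s), P(d) are the fractions of
# empty, single and double prediction regions over K test examples.

def region_fractions(regions: list[set[str]]) -> tuple[float, float, float]:
    """Return (P_e, P_s, P_d); for a two-class problem they sum to 1."""
    K = len(regions)
    p_e = sum(1 for r in regions if len(r) == 0) / K
    p_s = sum(1 for r in regions if len(r) == 1) / K
    p_d = sum(1 for r in regions if len(r) == 2) / K
    return p_e, p_s, p_d

# K = 5 hypothetical regions for a positive/negative classification task.
regions = [{"pos"}, {"pos", "neg"}, set(), {"neg"}, {"pos"}]
print(region_fractions(regions))  # (0.2, 0.6, 0.2)
```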