19th IEEE International Conference on Tools With Artificial Intelligence(ICTAI 2007) 2007
DOI: 10.1109/ictai.2007.47
Conformal Prediction with Neural Networks

Abstract: Conformal Prediction (CP) is a method that can be used for complementing the bare predictions produced by any traditional machine learning algorithm with measures of confidence. CP gives good accuracy and confidence values, but unfortunately it is quite computationally inefficient. This computational inefficiency problem becomes huge when CP is coupled with a method that requires long training times, such as Neural Networks. In this paper we use a modification of the original CP method, called Inductive Conformal Prediction (ICP).

Cited by 57 publications (63 citation statements). References 8 publications.
“…The experimental results detailed in Section 5 and in (Papadopoulos et al, 2002a;Papadopoulos et al, 2002b;Papadopoulos et al, 2007) show that the accuracy of ICPs is comparable to that of traditional methods, while the confidence measures they produce are useful in practice. Of course, as a result of removing some examples from the training set to form the calibration set, they sometimes suffer a small, but usually negligible, loss of accuracy from their underlying algorithm.…”
Section: Results
confidence: 86%
“…Furthermore, the TCP uses a richer set of nonconformity scores, computed from all the training examples, when calculating the p-values for each possible classification, as opposed to the small part of training examples, the calibration set, the ICP uses for the same purpose. As the experimental results in Section 5 and in (Papadopoulos et al, 2002a;Papadopoulos et al, 2002b;Papadopoulos et al, 2007) show, this loss of accuracy is negligible while the improvement in computational efficiency is massive.…”
Section: Comparison
confidence: 78%
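The excerpt above contrasts TCP, which recomputes nonconformity scores over all training examples for every candidate label, with ICP, which scores a fixed calibration set once. As a rough illustration (not the paper's own code; the nonconformity scores here are arbitrary toy values), the ICP p-value computation can be sketched as:

```python
# Hedged sketch of the ICP p-value step: for each candidate label, the
# test example's nonconformity score is ranked against the (fixed)
# calibration scores, rather than against all training examples as in TCP.
import numpy as np

def icp_p_values(calib_scores, test_scores_per_label):
    """Return an ICP p-value for each candidate label of one test example.

    calib_scores: nonconformity scores of the calibration examples.
    test_scores_per_label: the test example's nonconformity score under
        each possible label assignment.
    """
    n = len(calib_scores)
    p = np.empty(len(test_scores_per_label))
    for j, s in enumerate(test_scores_per_label):
        # Fraction of scores at least as nonconforming as the test score,
        # counting the test example itself (hence n + 1).
        p[j] = (np.sum(calib_scores >= s) + 1) / (n + 1)
    return p

# Toy example: 5 calibration scores, test example scored under 3 labels.
calib = np.array([0.1, 0.4, 0.2, 0.9, 0.3])
test = np.array([0.15, 0.85, 0.5])
print(icp_p_values(calib, test))  # → [0.83333333 0.33333333 0.33333333]
```

Because `calib_scores` never changes between test examples, the underlying model is trained only once, which is the source of ICP's computational advantage over TCP.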
“…The limitations of these theories in obtaining practical reliable values of confidence are detailed in [41], [48], [49], [50] and [1], and are summarized below.…”
Section: Limitations
confidence: 99%
“…Moreover, as noted in several papers, ICP models typically suffer a small loss in predictive efficiency compared to corresponding TCP models due to the reduced number of training and calibration examples [1, 5-8]. However, as pointed out in [1], an unstable nonconformity function, one that is heavily influenced by an outlier example, i.e., an erroneously labeled test instance (x_{k+1}, ỹ), can cause TCP models to become inefficient.…”
Section: Introduction
confidence: 99%