2011
DOI: 10.1198/jasa.2010.tm10053
Adaptive Confidence Intervals for the Test Error in Classification

Abstract: The estimated test error of a learned classifier is the most commonly reported measure of classifier performance. However, constructing a high-quality point estimator of the test error has proved to be very difficult. Furthermore, common interval estimators (e.g., confidence intervals) are based on the point estimator of the test error and thus inherit all the difficulties associated with the point estimation problem. As a result, these confidence intervals do not reliably deliver nominal coverage. In contrast …

Cited by 54 publications (53 citation statements); References 16 publications.
“…Due to high heterogeneities among individuals, there may be large variations in the estimated treatment rules across different training sets. Laber & Murphy (2011) construct an adaptive confidence interval for the test error under the non-regular framework. Confidence intervals for value functions help us determine whether essential differences exist among different decision rules.…”
Section: Discussion
Confidence: 99%
“…Unfortunately, even in an unweighted classification problem, constructing a CI for the test error is difficult due to the inherent non-smoothness; standard methods like the normal approximation or the usual bootstrap fail. Laber and Murphy [83] developed a method for constructing such CIs using smooth data-dependent upper and lower bounds on the test error; this method is similar to the ACI method described in Section 4.2. While intuitively one can expect that this method could be successfully adapted for the Value of an estimated DTR, more targeted research is needed to extend and fine-tune the procedure to the current setting.…”
Section: Confidence Intervals for the Value of an Estimated DTR
Confidence: 99%
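As context for the difficulty the quoted passage describes, here is a minimal sketch of the standard normal-approximation (Wald) interval for the test error of a classifier — the kind of naive interval whose coverage the cited work shows can break down when the classifier is itself estimated from data. The function name and synthetic data below are illustrative, not from the paper:

```python
import numpy as np

def naive_test_error_ci(y_true, y_pred, z=1.96):
    """Wald-type CI for the test error of a *fixed* classifier.

    Treats the per-example error indicators as i.i.d. Bernoulli draws;
    this is exactly the assumption that fails in the non-regular setting
    discussed above, where the classifier depends on the training data.
    """
    errors = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    n = errors.size
    err_hat = errors.mean()                         # point estimate of test error
    se = np.sqrt(err_hat * (1.0 - err_hat) / n)     # binomial standard error
    return max(0.0, err_hat - z * se), min(1.0, err_hat + z * se)

# Illustration on synthetic labels/predictions (~20% error rate).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
yhat = np.where(rng.random(200) < 0.8, y, 1 - y)
lo, hi = naive_test_error_ci(y, yhat)
```

The interval always contains the point estimate and is valid for a classifier chosen independently of the test set; the non-smoothness problem arises only once the decision rule is learned from the same data-generating process.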
“…Laber and Murphy (2011) use an interesting non-regular framework to examine the behavior of the test error for a wide class of linear classifiers. The non-regular framework highlights why existing confidence intervals do not work well for small-sized samples.…”
Section: Introduction
Confidence: 99%
“…Using the non-regular framework, the authors show that the scaled and centered test error can be decomposed into two components, corresponding to points that are on the optimal decision boundary and points that are not (see Laber and Murphy 2011, Eq. 6).…”
Section: Introduction
Confidence: 99%