2014
DOI: 10.1186/s13637-014-0015-0
Unbiased bootstrap error estimation for linear discriminant analysis

Abstract: Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weig…
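The convex combination described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a hypothetical nearest-mean classifier stands in for LDA, and the number of replicates `B` is an arbitrary choice.

```python
import numpy as np

def nearest_mean_fit(X, y):
    # Hypothetical stand-in classifier: one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    # Assign each point to the class with the nearest mean.
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def bootstrap_632(X, y, B=100, w=0.632, rng=None):
    """Convex combination: (1 - w) * resubstitution + w * basic bootstrap."""
    rng = np.random.default_rng(rng)
    n = len(y)
    n_classes = len(np.unique(y))
    # Resubstitution error: train and test on the full sample (optimistic).
    resub = np.mean(nearest_mean_predict(nearest_mean_fit(X, y), X) != y)
    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)               # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag points
        if oob.size == 0 or len(np.unique(y[idx])) < n_classes:
            continue                              # skip degenerate replicates
        model = nearest_mean_fit(X[idx], y[idx])
        errs.append(np.mean(nearest_mean_predict(model, X[oob]) != y[oob]))
    e0 = float(np.mean(errs))                     # basic (out-of-bag) bootstrap error
    return (1 - w) * resub + w * e0
```

With `w = 0.632` this is the classic 0.632 estimator; the paper's question is how to choose `w` so the combined estimator is unbiased at finite sample sizes.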

Cited by 5 publications (6 citation statements)
References 52 publications
“…These results were expected, since these curves only represent the training and test sets, respectively. The 0.632 and 0.632+ bootstrap curves had an overall similar performance, with marginally lower values for the latter, since the 0.632+ estimator's performance depends on the amount of overfitting, whereas the former has a constant weight [1,14,15]. Hence, the 0.632+ estimator provided the best results with the least variance (similar results within the same condition, fixed or variable lambda) and bias (no over- or underestimation), and it should thus be used in future analyses.…”
Section: Discussion
confidence: 98%
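The adaptive weight the quoted passage alludes to can be written down explicitly. A minimal sketch of the 0.632+ rule (Efron and Tibshirani, 1997), assuming the standard definitions: `resub` is the resubstitution error, `e0` the basic (out-of-bag) bootstrap error, and `gamma` the no-information error rate; the function and variable names here are my own.

```python
def err_632_plus(resub, e0, gamma):
    """0.632+ estimator: the weight grows from 0.632 toward 1 with overfitting."""
    e0c = min(e0, gamma)                        # clip the bootstrap error at gamma
    if e0c > resub and gamma > resub:
        R = (e0c - resub) / (gamma - resub)     # relative overfitting rate, in [0, 1]
    else:
        R = 0.0                                 # no apparent overfitting
    w = 0.632 / (1.0 - 0.368 * R)               # w = 0.632 at R = 0, w -> 1 as R -> 1
    return (1.0 - w) * resub + w * e0c
```

When there is no overfitting (`R = 0`) this reduces to the fixed-weight 0.632 estimator; under maximal overfitting (`R = 1`) it returns the clipped bootstrap error itself.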
“…Finding a method for validating predictive models and obtaining an unbiased performance estimate has been a target of discussion by multiple authors [11,16]. Although there are several approaches to estimating the error rate of a prediction rule, such as the jackknife (leave-one-out) method and cross-validation, the bootstrap method has been considered the most efficient over the years, as it directly assesses variability, returns higher accuracy, and can calculate the variance of a point estimate of prediction error [1,14,15].…”
Section: The Bootstrap Method
confidence: 99%