2009
DOI: 10.1016/j.csda.2009.04.009

Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap

Cited by 655 publications (382 citation statements), published 2010–2024
References 12 publications
“…In order to avoid performance bias due to an arbitrary split of the labeled data, we re-split the labeled data into training and test sets for a total of 30 splits and averaged the model's performance. This repeated cross-validation approach [46] has been used in similar studies to estimate the sensitivity and average accuracy of the model using different training and test data [15,17,24,26,29]. We used a three-fold design where the labeled dataset was split into three groups of crowns, where two groups were used to train the model and one group was reserved to test the model.…”
Section: Assessment of SVM Classifier
confidence: 99%
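Read literally, the excerpt describes ten repeats of a three-fold design, giving the 30 train/test splits whose performance is averaged. A minimal sketch of that scheme, assuming scikit-learn and synthetic placeholder data (the crown dataset and the study's exact SVM settings are not available here):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for the labeled crown dataset.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Three folds: two train the model, one tests it; re-splitting
# 10 times yields the 30 splits described in the excerpt.
cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv, scoring="accuracy")

print(f"Mean accuracy over {len(scores)} splits: {scores.mean():.3f} "
      f"(std {scores.std():.3f})")
```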
“…This can be done by using a repeated cross-validation algorithm, which, in comparison with other estimators such as repeated hold-out and bootstrap, was shown to be an adequate estimator of accuracy (Kohavi 1995) or related classification error rates (Kim 2009). The penalized accuracy estimations presented in the following sections correspond to the cross-validated estimates, derived using repeated tenfold cross-validation.…”
Section: Absolute Goodness-of-Fit: Penalized Accuracy and Cross-Validation
confidence: 99%
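The repeated tenfold cross-validation mentioned here can be sketched as follows; scikit-learn, the logistic-regression classifier, and the five-repeat count are illustrative assumptions, not the cited implementation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=1)

# Five repeats of ten-fold CV; the error-rate estimate is
# one minus the mean accuracy over all 50 folds.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"Repeated 10-fold CV error estimate: {1 - acc.mean():.3f}")
```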
“…To obtain reliable estimates of the prediction error, quite extensive resampling is required in both the cross-validation and bootstrap procedures (see e.g. Kim, 2009). However, in large-scale problems, recalculating a robust fit a large number of times becomes very time consuming.…”
Section: Introduction
confidence: 99%
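The cost this excerpt points to, one model refit per resample, is visible in a plain out-of-bag bootstrap error estimate. A hedged sketch, where the classifier and the B = 200 replicate count are illustrative assumptions rather than the cited paper's setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=300, random_state=2)
n, B, errors = len(y), 200, []

for _ in range(B):
    idx = rng.integers(0, n, size=n)       # bootstrap sample, with replacement
    oob = np.setdiff1d(np.arange(n), idx)  # out-of-bag observations
    if oob.size == 0:
        continue
    # One full refit per replicate -- the expensive step for robust fits.
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    errors.append(np.mean(model.predict(X[oob]) != y[oob]))

print(f"Out-of-bag bootstrap error over {len(errors)} refits: "
      f"{np.mean(errors):.3f}")
```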