2009
DOI: 10.1155/2009/158368
Is Bagging Effective in the Classification of Small-Sample Genomic and Proteomic Data?

Abstract: There has been considerable interest recently in the application of bagging in the classification of both gene-expression data and protein-abundance mass spectrometry data. The approach is often justified by the improvement it produces on the performance of unstable, overfitting classification rules under small-sample situations. However, the question of real practical interest is whether the ensemble scheme will improve performance of those classifiers sufficiently to beat the performance of single stable, no…

Cited by 8 publications (6 citation statements); references 33 publications.
“…It is important to select an odd m to avoid the issue of tie breaking in the majority vote. Experimental results in our previous study [17] showed that increasing m beyond m = 51 leads to negligible differences in performance.…”
Section: Bagged Classification Rules
confidence: 93%
“…, m, for large enough m. How large m should be is an important question in bagging: m must be large enough that the Monte Carlo approximation is accurate, yet small enough to remain computationally efficient. In this paper, we chose m = 51 according to the recommendation from Breiman [21] and from our observations on the convergence of the mean error of bagged classifiers in our previous study [17]. It is important to select an odd m to avoid the issue of tie breaking in the majority vote.…”
Section: Bagged Classification Rules
confidence: 99%
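The odd-m majority-vote scheme the citing authors describe can be sketched as follows. This is a minimal illustration, not the paper's implementation: `bagged_majority_vote` and `nearest_mean` are hypothetical names, the base rule is a toy nearest-class-mean classifier standing in for LDA, and two classes labeled 0/1 are assumed.

```python
import numpy as np

def bagged_majority_vote(X_train, y_train, X_test, base_fit_predict, m=51, seed=0):
    """Bootstrap-aggregate (bag) a base classification rule.

    m is deliberately odd (e.g. m = 51, as in the quoted studies) so the
    two-class majority vote can never tie. Labels are assumed to be 0/1.
    """
    rng = np.random.default_rng(seed)
    n = len(y_train)
    votes = np.zeros((m, len(X_test)), dtype=int)
    for i in range(m):
        # Bootstrap sample: draw n training indices with replacement.
        idx = rng.integers(0, n, size=n)
        votes[i] = base_fit_predict(X_train[idx], y_train[idx], X_test)
    # Majority vote over the m bootstrap classifiers; odd m => no ties.
    return (votes.sum(axis=0) > m // 2).astype(int)

def nearest_mean(X_tr, y_tr, X_te):
    """Toy linear base rule: assign each point to the nearer class mean."""
    m0 = X_tr[y_tr == 0].mean(axis=0)
    m1 = X_tr[y_tr == 1].mean(axis=0)
    d0 = ((X_te - m0) ** 2).sum(axis=1)
    d1 = ((X_te - m1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)
```

With well-separated Gaussian classes, all m bootstrap classifiers agree and the bagged prediction matches the base rule; the interesting small-sample behavior the paper studies arises precisely when the base rule is unstable across bootstrap samples.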
“…In this paper, we apply this scheme to the LDA classification rule defined previously. Notice the distinction between a bootstrap LDA classifier and a ‘bagged’ (bootstrap-aggregated) LDA classifier [ 47 ],[ 48 ]; these correspond to distinct classification rules. The bootstrap LDA classifier is employed here as an auxiliary tool to analyze the problem of unbiased bootstrap error estimation for the plain LDA classifier.…”
Section: Bootstrap Classification
confidence: 99%
“…Breiman found that gains in accuracy could be obtained by bagging when the base learner is not stable [ 6 ]. However, Vu and Braga-Neto argued that the use of bagging in classification of small-sample data increases computational cost, but is not likely to improve overall classification accuracy over other simpler classification rules [ 10 ]. Moreover, if the sample size is small, the gains achieved via a bagged ensemble may not compensate for the decrease in accuracy of individual models [ 11 ].…”
Section: Introduction
confidence: 99%