Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001
DOI: 10.1109/cvpr.2001.991030

Bagging is a small-data-set phenomenon

Cited by 6 publications (5 citation statements). References 7 publications.
“…Results obtained here seem to support the position that bagging results depend simply on obtaining a diverse set of classifiers (Breiman [1996], Chawla et al [2001], Dietterich [2000], Domingos [1996]). Building classifiers on disjoint partitions of the data provides a set of classifiers that meet this requirement.…”
Section: Conclusion and Discussion (supporting)
confidence: 50%
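
As an illustration of the scheme this quote describes, here is a minimal sketch of disjoint-partition ensembling: split the training data into k non-overlapping chunks, train one classifier per chunk, and combine by majority vote. This uses scikit-learn with a synthetic dataset; the dataset, k, and base learner are arbitrary choices for illustration, not from the cited papers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy data; any classification dataset would do here.
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_train, y_train = X[:2400], y[:2400]
X_test, y_test = X[2400:], y[2400:]

# Split the shuffled training indices into k disjoint chunks
# and train one tree per chunk.
k = 8
order = np.random.default_rng(0).permutation(len(X_train))
models = [
    DecisionTreeClassifier(random_state=i).fit(X_train[part], y_train[part])
    for i, part in enumerate(np.array_split(order, k))
]

# Combine the k partition-trained classifiers by majority vote.
votes = np.mean([m.predict(X_test) for m in models], axis=0)
ensemble_pred = (votes > 0.5).astype(int)
print("disjoint-partition ensemble accuracy:", (ensemble_pred == y_test).mean())
```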
“…With regard to the former issue, currently the main explanation of bagging operation is given in terms of its capability to reduce the variance component of the misclassification probability, which was related by Breiman [3] to the degree of "instability" of the base classifier, informally defined as the tendency of undergoing large changes in its decision function as a result of small changes in the training set: the more unstable a classifier, the higher the variance component of its misclassification probability and thus the improvement attained by bagging. For classification problems, this explanation is supported by empirical evidence [2], [6], [16], [20], according to several bias-variance decompositions proposed so far, although alternative explanations have been proposed as well (for instance [5], [7], [10]), and some works showed that bagging can also reduce bias [2], [20]. With regard to the latter issue above, it is well known that bagging misclassification rate tends to an asymptotic value as the ensemble size increases.…”
Section: Introduction (mentioning)
confidence: 77%
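
The variance-reduction account in this quote can be checked empirically. The sketch below is my own illustration with scikit-learn, not from the cited works: fit an unstable base learner (an unpruned decision tree) on many independent training samples and compare how much its predictions on fixed test points fluctuate across samples, against a bagged ensemble of such trees. The dataset and sizes are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_pool, y_pool = X[:4000], y[:4000]
X_test = X[4000:]                        # fixed evaluation points

single_preds, bagged_preds = [], []
for trial in range(30):                  # 30 independent training sets
    idx = rng.choice(len(X_pool), size=500, replace=False)
    tree = DecisionTreeClassifier(random_state=trial)
    bag = BaggingClassifier(n_estimators=50, random_state=trial)  # default base learner is a tree
    single_preds.append(tree.fit(X_pool[idx], y_pool[idx]).predict(X_test))
    bagged_preds.append(bag.fit(X_pool[idx], y_pool[idx]).predict(X_test))

def disagreement(preds):
    """Fraction of (trial, test point) predictions differing from the
    per-point majority prediction across trials: a proxy for variance."""
    P = np.asarray(preds)
    majority = (P.mean(axis=0) > 0.5).astype(int)
    return (P != majority).mean()

print("single tree  variance proxy:", disagreement(single_preds))
print("bagged trees variance proxy:", disagreement(bagged_preds))
```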
“…We point out however that this last conclusion cannot be formally derived from the results of Sect. II-B, since the bias-variance decomposition of bagging error given by (8) is related to the error of a single classifier trained not on the original training set (1), but on a bootstrap replicate of it (5). For the same reason, the results of Sect.…”
Section: Discussion (mentioning)
confidence: 96%
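
The distinction this quote draws can be restated schematically. The following is a generic rendering, not the cited paper's equations (1), (5), or (8): writing T for the original training set and T^b for a bootstrap replicate of it, the decomposition averages the error of classifiers trained on replicates, which need not equal the error of the classifier trained on T itself.

```latex
% Generic restatement, not the cited paper's Eq. (8): the decomposition
% concerns classifiers f trained on bootstrap replicates T^b of T,
% not the classifier trained on T, so the two errors need not coincide.
\[
  \mathbb{E}_{T^{b}}\!\left[ \Pr\!\big( f_{T^{b}}(x) \neq y \big) \right]
  \;\neq\;
  \Pr\!\big( f_{T}(x) \neq y \big)
\]
```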
“…More or less diversity in an ensemble is achieved depending on the level of dissimilarity between the training sets of the base classifiers composing the ensemble. However, bootstrapped training sets become more and more similar as redundancy increases (Chawla et al., 2001). Redundancy not only slows down the training task but can also significantly decrease diversity in bagging, degrading its performance and affecting the rarer and harder classes.…”
Section: Introduction (mentioning)
confidence: 99%
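
The redundancy effect this quote describes is easy to simulate. The sketch below is my own illustration, not from the citing paper: pad a pool of distinct examples with exact duplicates and measure, via Jaccard similarity, how much the sets of distinct examples covered by two bootstrap replicates overlap. More duplication makes the replicates nearly identical in coverage, which is the loss of diversity at issue.

```python
import numpy as np

rng = np.random.default_rng(0)
n_distinct = 200

for copies in (1, 2, 5, 10):             # redundancy level: exact duplicates per example
    data = np.repeat(np.arange(n_distinct), copies)   # ids of distinct examples
    sims = []
    for _ in range(200):                  # many pairs of bootstrap replicates
        a = set(rng.choice(data, size=len(data), replace=True))
        b = set(rng.choice(data, size=len(data), replace=True))
        sims.append(len(a & b) / len(a | b))   # Jaccard similarity of distinct coverage
    print(f"{copies:2d} copies per example: mean Jaccard = {np.mean(sims):.3f}")
```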