2008
DOI: 10.1109/tnn.2007.902723
A Dynamically Adjusted Mixed Emphasis Method for Building Boosting Ensembles

Abstract: Progressively emphasizing samples that are difficult to classify correctly underlies the recognized high performance of Real AdaBoost (RA) ensembles. The corresponding emphasis function can be written as the product of a factor that measures the quadratic error and a factor related to the proximity to the classification border; this fact opens the door to exploring the potential advantages of adjustable combined forms of these factors. In this paper, we introduce a principled procedure to sele…
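The product form mentioned in the abstract is easy to illustrate. The sketch below is a minimal reconstruction, assuming the mixed emphasis takes the form D(x) ∝ exp(λ(f(x) − y)² − (1 − λ)f(x)²), with ensemble output f, label y ∈ {−1, +1}, and mixing parameter λ ∈ [0, 1]; the function name and normalization are illustrative, not the paper's exact formulation. Note that λ = 0.5 recovers the standard RA weighting exp(−y f(x)) up to a constant factor.

```python
import numpy as np

def mixed_emphasis(f, y, lam=0.5):
    """Illustrative mixed emphasis weighting (assumed form, not the
    paper's exact definition): a quadratic-error factor
    exp(lam * (f - y)^2) times a border-proximity factor
    exp(-(1 - lam) * f^2), normalized to a distribution.

    f   : ensemble outputs, shape (n,), roughly in [-1, 1]
    y   : labels in {-1, +1}, shape (n,)
    lam : mixing parameter in [0, 1]; lam = 0.5 reduces to Real
          AdaBoost's exp(-y * f) weighting up to a constant.
    """
    w = np.exp(lam * (f - y) ** 2 - (1.0 - lam) * f ** 2)
    return w / w.sum()

# Misclassified samples (sign(f) != y) and samples near the border
# (f close to 0) both receive larger weights.
f = np.array([0.9, 0.1, -0.8, 0.05])
y = np.array([1.0, 1.0, 1.0, -1.0])
print(mixed_emphasis(f, y, lam=0.5))
```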

Cited by 36 publications (23 citation statements)
References 39 publications
“…Following the advice of [51], both parameters are selected by a CV process with 9 equally spaced values in the range [0.1, 0.9], together with parameter r, which is explored in the same range as for the above-mentioned methods. The ensemble growth is stopped according to the approach proposed in [38], which selects T as the first value holding…”
Section: Machines
confidence: 99%
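The cross-validation grid described in the statement above is straightforward to sketch. The snippet below is a hedged illustration assuming a generic `train_and_score` helper (hypothetical, not from the cited paper); it only shows the 9-point grid over [0.1, 0.9].

```python
import numpy as np

def select_lambda(train_and_score):
    """Grid-search a mixing parameter over 9 equally spaced values
    in [0.1, 0.9], matching the CV setup quoted above.

    train_and_score : hypothetical callable mapping a candidate
                      lambda to a validation score (higher = better).
    """
    grid = np.linspace(0.1, 0.9, 9)  # 0.1, 0.2, ..., 0.9
    scores = [train_and_score(lam) for lam in grid]
    return grid[int(np.argmax(scores))]
```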
“…It is true that overfitting can appear under some difficult conditions, such as a high level of noise or many outliers in the training set [31][32][33]. However, several modified algorithms reduce the corresponding negative effects [34][35][36][37][38][39][40][41].…”
Section: Introduction
confidence: 99%
“…Our experience indicates that results are very similar if the selection mechanism pays attention both to the difficulty of classifying each sample and to its proximity to a reasonably established classification frontier, which are the essential aspects for evaluating the importance of training examples [29,30]. Consequently, we employed a simple two-step procedure to design the highest-performance machine presented in [13], GG-FWC 3, which used Gaussian kernels.…”
Section: A Powerful Post-aggregation Machine and Some Additional App…
confidence: 99%
“…The final designs are obtained by repeating the above training for the selected size, and using the average of the learners' outputs as the ensemble output. RAB ensembles are built as in [13], using MLP learners whose sizes M are determined by CV, and applying a simple stopping criterion [29,30]. T is the resulting number of learners.…”
Section: Machine Designs
confidence: 99%
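The averaging step this statement describes is simple to sketch. In the assumed illustration below, `learners` is a hypothetical list of trained models exposing a `predict` method that returns real-valued outputs; none of these names come from the cited papers.

```python
import numpy as np

def ensemble_output(learners, X):
    """Average the real-valued outputs of T trained learners to form
    the ensemble output, as the statement above describes; the final
    class label is the sign of the average.

    learners : hypothetical list of trained models exposing
               .predict(X) -> array of shape (n,)
    """
    outputs = np.stack([m.predict(X) for m in learners])  # (T, n)
    avg = outputs.mean(axis=0)
    return np.sign(avg), avg
```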
“…The first proposal along this line was [8], followed by many schemes proposed to emphasize sample populations according to their proximity to the borders [9][10][11] or to the size of the corresponding errors [12][13], or even to emphasize the centroids of RBF machines according to criteria of these types [14]. The original formulations of boosting ensembles [15][16][17] minimize functionals of the margin cost and thus seem to give importance to erroneous samples; however, [18][19] prove that Real AdaBoost (and the margin cost) emphasizes both erroneous samples and those that are near the borders, and present some useful modifications of the balance between both weighting possibilities. In general, it is unclear which of these two types of samples is more important for a good design, although the answer seems to be problem dependent [18][19][20].…”
Section: Introduction
confidence: 99%
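The claim in the statement above, that Real AdaBoost weights both erroneous and border samples, follows from a short factorization of its exponential emphasis. The derivation below is a standard reconstruction using y ∈ {−1, +1} (so y² = 1), not text from the cited papers.

```latex
% Real AdaBoost emphasis over ensemble output f(x) and label y:
\[
e^{-y f(\mathbf{x})}
  = e^{\frac{1}{2}\left[(f(\mathbf{x}) - y)^2 - f(\mathbf{x})^2 - y^2\right]}
  \;\propto\;
  \underbrace{e^{\frac{1}{2}(f(\mathbf{x}) - y)^2}}_{\text{quadratic error}}
  \cdot
  \underbrace{e^{-\frac{1}{2} f(\mathbf{x})^2}}_{\text{border proximity}},
\]
% since -yf = \tfrac{1}{2}\,[(f - y)^2 - f^2 - y^2] and y^2 = 1 is
% constant, so it drops into the proportionality factor.
```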