2014
DOI: 10.1007/s11063-014-9386-1
Neighborhood Guided Smoothed Emphasis for Real Adaboost Ensembles

Cited by 8 publications (5 citation statements)
References 25 publications
“…Moreover, among the compared algorithms, it seems that ABC and CSB perform slightly better than the other two. A possible reason is that they use the formula of the Adaboost algorithm [31,26,2,9], which performs well on detection problems.…”
Section: Comparison To Off-line Cost Sensitivity Classification Methods
“…In other studies, penalty terms are used in the cost function to avoid focusing on outliers or on instances that are difficult to classify [45,28,51,52]. It is also possible to use hybrid weighting methods that modulate the emphasis on instances according to their distance to the decision boundary [27,26,2]. Most boosting algorithms use convex loss functions.…”
Section: Previous Work
confidence: 99%
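
The hybrid weighting idea quoted above can be illustrated with a small sketch. The following Python snippet is not the authors' implementation; it shows one plausible mixed emphasis in which a squared-error term (boosting erroneous samples) is traded off against a proximity term (the squared ensemble output, which de-emphasizes samples far from the decision boundary). The function name mixed_emphasis and the trade-off parameter lam are illustrative, and the exact functional form is an assumption based on the mixed-emphasis literature cited in the statement.

import numpy as np

def mixed_emphasis(f_out, y, lam=0.5):
    # Hedged sketch of a hybrid (error/proximity) emphasis for Real Adaboost.
    # f_out: ensemble outputs f(x_i) in [-1, 1]; y: labels in {-1, +1}.
    # lam is an illustrative trade-off parameter, not the papers' notation.
    error_term = (f_out - y) ** 2        # larger for erroneous samples
    proximity_term = f_out ** 2          # larger for samples far from the boundary
    w = np.exp(lam * error_term - (1.0 - lam) * proximity_term)
    return w / w.sum()                   # normalize to a sampling distribution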
“…To appreciate the differences in results for the different emphasis methods, Table 3 shows the performance of ARA and standard RA (the latter are taken from [2], Table 2). It appears that the winning method is problem-dependent: ARA for Cre, Hep, Ion, and Rip, and RA for Dia, Ger, and Ima.…”
Section: Some Deeper Perspectives
confidence: 99%
“…A second step is taken in [1,2], where the above mixed emphasis is also applied to the K nearest neighbors of each sample, and the overall weight for each sample is a convex combination of the individual and the average neighbor emphasis. A version called DWK-RA (Dynamic Weighting K-neighbour Real Adaboost), in which the combination of the error and proximity terms is selected for each learner by minimizing the edge parameter, provides excellent experimental results.…”
Section: Introduction
confidence: 99%
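
The neighborhood-guided step described in the statement above can likewise be sketched: each sample's own emphasis is blended with the average emphasis of its K nearest neighbors through a convex combination. This Python snippet is an assumption-laden illustration, not the DWK-RA algorithm itself; in particular, the per-learner selection of the combination by minimizing the edge parameter is not reproduced, and the names neighborhood_smoothed_weights, k, and beta are hypothetical.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_smoothed_weights(X, w_individual, k=5, beta=0.5):
    # Hedged sketch: convex combination of each sample's own emphasis and the
    # average emphasis of its K nearest neighbors, as described in the quote.
    # X: (n, d) feature matrix; w_individual: (n,) per-sample emphasis values.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                    # first neighbor is the sample itself
    neighbor_avg = w_individual[idx[:, 1:]].mean(axis=1)
    w = beta * w_individual + (1.0 - beta) * neighbor_avg
    return w / w.sum()                           # keep the weights a distribution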