2023
DOI: 10.1007/s13042-022-01766-6

Improving boosting methods with a stable loss function handling outliers

Abstract: In classification problems, abnormal observations are frequently encountered. How to obtain a stable model that copes with outliers has long been a subject of widespread concern. In this article, we draw on the ideas of the AdaBoost algorithm and propose an asymptotically linear loss function, which makes the output function more stable on contaminated samples, and we design two boosting algorithms, based on two different ways of updating, to handle outliers. In addition, a skill for overcomin…
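The abstract's central idea, a loss that grows only linearly for badly misclassified points, can be illustrated numerically. The paper's actual loss function is not given in this excerpt; the sketch below uses the logistic loss purely as a stand-in example of an asymptotically linear loss, contrasted with AdaBoost's exponential loss, whose value (and hence sample weight) explodes on outliers.

```python
import numpy as np

# Margins m = y * f(x); a large negative margin corresponds to a badly
# misclassified sample, e.g. a mislabeled point or an outlier.
margins = np.array([2.0, 0.5, -0.5, -5.0, -20.0])

# AdaBoost's exponential loss exp(-m): grows exponentially, so a single
# outlier can dominate the reweighting of the training set.
exp_loss = np.exp(-margins)

# A generic asymptotically linear loss, log(1 + exp(-m)). This is the
# logistic loss, used here ONLY as an illustration -- it is not the loss
# proposed in the paper. For m -> -inf it behaves like -m, so the loss
# (and its gradient) assigns bounded influence to each outlier.
linear_tail_loss = np.logaddexp(0.0, -margins)

for m, e, l in zip(margins, exp_loss, linear_tail_loss):
    print(f"margin={m:6.1f}  exp_loss={e:14.2f}  asympt_linear={l:8.2f}")
```

At margin -20 the exponential loss is roughly 4.9e8 while the linear-tailed loss is about 20, which is why a bounded-slope loss keeps the ensemble's output stable on contaminated samples.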

Cited by 3 publications (1 citation statement)
References: 35 publications
“…These samples have a higher chance of being learned by the subsequent classifiers, and they are forced to optimize the margin distribution on these samples. However, when the noise-detection assumption does not hold, the algorithm often performs unsatisfactorily [14]. Boosting strategy requires training multiple classifiers in a serial manner, which can be a time-consuming procedure.…”
Section: Introduction
confidence: 99%
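The quoted statement refers to boosting's reweighting mechanism: misclassified samples are up-weighted so that subsequent weak learners focus on them, which is exactly what makes classical AdaBoost noise-sensitive and its training inherently serial. A minimal sketch of one standard AdaBoost reweighting round (toy values, not specific to the cited paper):

```python
import numpy as np

# One AdaBoost reweighting step, assuming labels y in {-1, +1} and
# weak-learner predictions h (toy values for illustration).
y = np.array([1, 1, -1, -1, 1])
h = np.array([1, -1, -1, 1, 1])        # mistakes at indices 1 and 3
w = np.full(len(y), 1.0 / len(y))      # uniform initial sample weights

err = w[y != h].sum()                  # weighted error of the weak learner
alpha = 0.5 * np.log((1.0 - err) / err)

# Misclassified samples get factor exp(+alpha), correct ones exp(-alpha):
# the next learner is forced toward the hard samples -- including any
# outliers, which is the noise sensitivity the statement points at.
w *= np.exp(-alpha * y * h)
w /= w.sum()
print(np.round(w, 3))                  # [0.167 0.25  0.167 0.25  0.167]
```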