2017
DOI: 10.1038/srep44649

Generating highly accurate prediction hypotheses through collaborative ensemble learning

Abstract: Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance …
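For readers who want a concrete picture, the following is a minimal, illustrative sketch of one standard way to combine bagging and boosting for binary classification with scikit-learn. It is a toy under stated assumptions, not the paper's collaborative ensemble scheme: the estimators, dataset, and parameters are placeholders.

```python
# Illustrative sketch only: bagging over boosted base learners.
# NOT the paper's collaborative ensemble scheme; it merely shows one common
# way to pair bagging (variance reduction) with boosting (bias reduction).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (placeholder data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each bagged member is itself a small boosted ensemble.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0)
ensemble = BaggingClassifier(
    estimator=boosted,        # use base_estimator= on scikit-learn < 1.2
    n_estimators=10,
    max_samples=0.8,
    random_state=0,
)

ensemble.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```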

Cited by 13 publications (10 citation statements)
References 55 publications
“…The results obtained with the Bagging algorithm showed that, when there are only a few outliers, the outliers in question have little effect on the result. Similar results are observed when the data is scaled. Many studies have reported findings similar to these [32], [28], [33], because methods based on Boosting algorithms show weaker classification performance on variables that bear outliers and covariates than on adjusted variables. In contrast, the Bagging algorithm shows poor classification performance due to deviation from the sample [32].…”
Section: Discussion (supporting)
confidence: 61%
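To illustrate the contrast the statement above draws between bagging and boosting in the presence of outliers, here is a hedged toy comparison under injected label noise. The dataset, noise level, and estimator settings are assumptions for illustration only and do not reproduce the cited study.

```python
# Illustrative sketch only: bagging vs. boosting under label noise ("outliers").
# Settings are placeholders and do not reproduce the cited study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=20, random_state=1)

# Flip 5% of the labels to mimic a few outlying observations.
rng = np.random.default_rng(1)
flip = rng.choice(len(y), size=int(0.05 * len(y)), replace=False)
y_noisy = y.copy()
y_noisy[flip] = 1 - y_noisy[flip]

for name, clf in [
    ("bagging", BaggingClassifier(n_estimators=50, random_state=1)),
    ("boosting", AdaBoostClassifier(n_estimators=50, random_state=1)),
]:
    score = cross_val_score(clf, X, y_noisy, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```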
“…, and if L_D(θ, λ) is ρ-Lipschitzian with respect to its first argument, we can then rewrite Equation (5)…”
Section: Hypothesis and Pointwise Hypothesis Stability of Logistic Re… (mentioning)
confidence: 99%
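For context, the ρ-Lipschitz condition invoked above is the standard one; in generic notation (Equation (5) of the citing paper is not reproduced here) it reads:

```latex
% rho-Lipschitz continuity of the loss L_D in its first argument
% (generic statement; symbols are placeholders, not the citing paper's).
\left| L_D(\theta, \lambda) - L_D(\theta', \lambda) \right|
  \;\le\; \rho \,\lVert \theta - \theta' \rVert
  \qquad \text{for all } \theta, \theta' .
```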
“…As the previous paragraph states, hypothesis stability is the weakest of the three notions. Hypothesis stability has been derived for the k-nearest neighbors algorithm (k-NN) [3], for linear regression and regularized logistic regression in a technical report, for the AdaBoost algorithm [4], and for the Gentle Boost algorithm [5] as a constituent of a collaborative ensemble scheme [5,6]. L2 regularization with a penalty λ ∈ R + stabilizes learning algorithms because their objective functions then become λ-strongly convex, which leads to meeting the stability criterion.…”
Section: Introduction (mentioning)
confidence: 99%
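As a reminder of the notion referenced above, hypothesis stability is usually stated in the Bousquet–Elisseeff form sketched below; the symbols are generic placeholders rather than notation from the cited papers.

```latex
% Hypothesis stability: A_S is the hypothesis learned on a sample S of size n,
% S^{\setminus i} is S with the i-th example removed, \ell is the loss,
% and beta is the stability constant.
\mathbb{E}_{S,\,z}\!\left[\,\bigl|\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)\bigr|\,\right]
  \;\le\; \beta
  \qquad \text{for all } i \in \{1, \dots, n\}.
```

The L2-regularization remark follows because adding a penalty (λ/2)‖θ‖² to a convex objective makes that objective λ-strongly convex.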
“…An important virtue of stability-based bounds is their simplicity from a mathematical standpoint. Although originally not as tight as the PAC-Bayes bounds [19], which are considered the tightest, they can be optimized or consolidated, i.e., made significantly tighter in different ways, not the least of which is collaborative learning [25,26], which has attained significant generalization improvement.…”
Section: Related Work: Stability of Learning Algorithms (mentioning)
confidence: 99%
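For orientation only: stability-based generalization bounds of the kind discussed above typically take the schematic form below. The constants shown follow the classical pointwise-hypothesis-stability bound as it is commonly quoted; this is a hedged reminder, not the specific bounds optimized in [25,26].

```latex
% Schematic stability-based bound: R is the true risk, R_emp the empirical
% risk, n the sample size, M a bound on the loss, beta the (pointwise)
% hypothesis stability, and 1 - delta the confidence level.
R(A_S) \;\le\; R_{\mathrm{emp}}(A_S)
  \;+\; \sqrt{\frac{M^{2} + 12\,M\,n\,\beta}{2\,n\,\delta}} .
```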

Stacking and stability. Arsov, Pavlovski, Kocarev (2019). Preprint. [Self Cite]