2018
DOI: 10.1111/coin.12198

Bagged ensembles with tunable parameters

Abstract: Ensemble learning is a popular classification method where many individual simple learners contribute to a final prediction. Constructing an ensemble of learners has been shown to often improve prediction accuracy over a single learner. Bagging and boosting are the most common ensemble methods, each with distinct advantages. While boosting methods are typically very tunable with numerous parameters, to date, the type of flexibility this allows has been missing for general bagging ensembles. In this paper, we p…
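The abstract's theme of bagging with tunable parameters can be illustrated with scikit-learn's `BaggingClassifier`, whose resampling knobs (`n_estimators`, `max_samples`, `max_features`) are exposed for grid search. This is a generic sketch of tunable bagging, not the paper's specific method:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Tune the resampling parameters of a bagged-tree ensemble.
grid = GridSearchCV(
    BaggingClassifier(DecisionTreeClassifier(), random_state=0),
    param_grid={
        "n_estimators": [10, 50],
        "max_samples": [0.5, 1.0],   # fraction of rows drawn per learner
        "max_features": [0.5, 1.0],  # fraction of columns drawn per learner
    },
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)
```

Cross-validating over the resampling fractions, rather than fixing them at the classic bootstrap defaults, is one simple way to make a bagging ensemble as tunable as a boosting one.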

Cited by 31 publications (24 citation statements); references 38 publications.
“…Studies show that a single machine learning model can be outperformed by a “committee” of individual models, called a machine learning ensemble ( Zhang and Ma, 2012 ). Ensemble learning has proved effective because it can reduce bias, variance, or both, and, if the base learners are diverse enough, it can better capture the underlying distribution of the data and thus make better predictions ( Dietterich, 2000 ; Pham and Olafsson, 2019a ; Pham and Olafsson, 2019b ; Shahhosseini et al., 2019a ; Shahhosseini et al., 2019b ). The use of ensemble learning in ecological problems is becoming more widespread; for instance, bagging and specifically random forest ( Vincenzi et al., 2011 ; Mutanga et al., 2012 ; Fukuda et al., 2013 ; Jeong et al., 2016 ), boosting ( De'ath, 2007 ; Heremans et al., 2015 ; Belayneh et al., 2016 ; Stas et al., 2016 ; Sajedi-Hosseini et al., 2018 ), and stacking ( Conţiu and Groza, 2016 ; Cai et al., 2017 ; Shahhosseini et al., 2019a ) are some of the ensemble learning applications in agriculture.…”
Section: Introduction
confidence: 99%
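The three ensemble families the excerpt names (bagging via random forest, boosting, and stacking) can be sketched side by side with scikit-learn. This is a generic illustration, not code from the cited works:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for an ecological classification task.
X, y = make_classification(n_samples=400, n_features=20, random_state=1)

models = {
    "bagging (random forest)": RandomForestClassifier(random_state=1),
    "boosting": GradientBoostingClassifier(random_state=1),
    # Stacking: a meta-learner combines the base models' predictions.
    "stacking": StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(random_state=1)),
            ("gb", GradientBoostingClassifier(random_state=1)),
        ],
        final_estimator=LogisticRegression(),
    ),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=3).mean())
```

The stacked model trains a logistic-regression meta-learner on cross-validated predictions of the two base ensembles, which is the diversity-combining idea the quoted passage attributes to ensemble learning.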
“…The techniques used can range from classical statistical techniques to Machine Learning (ML) algorithms. The latter have enjoyed wide application in various ecological classification problems and predictive modeling (Rumpf et al 2010, Shekoofa et al 2014, Crane-Droesch 2018, Karimzadeh and Olafsson 2019, Pham and Olafsson 2019a, 2019b) because of their ability to handle nonlinear relationships, high-order interactions, and non-normal data (De'ath and Fabricius 2000). Such methods include regularized regressions (Hoerl and Kennard 1970, Tibshirani 1996, Zou and Hastie 2005), tree-based models (Shekoofa et al 2014), Support Vector Machines (Basak et al 2007, Karimi et al 2008), Neural Networks (Liu et al 2001, Crane-Droesch 2018, Khaki and Khalilzadeh 2019, Khaki and Wang 2019), and others.…”
Section: Introduction
confidence: 99%
“…whereas boosting is used to reduce bias. Each strategy has its own strengths and weaknesses, and finding the optimal balance between the two remains a challenging problem [18], [19].…”
Section: Methods
confidence: 99%
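The variance-reduction half of that trade-off can be sketched directly: averaging deep decision trees fit on bootstrap resamples (a hand-rolled bagging ensemble) typically lowers test error relative to a single high-variance tree. This is a minimal illustration of the general principle, not the parameter-balancing method of the cited works:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Noisy 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single fully grown tree: low bias, high variance.
single = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)

# Hand-rolled bagging: average 50 trees, each fit on a bootstrap resample.
preds = []
for b in range(50):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    tree = DecisionTreeRegressor(random_state=b).fit(X_tr[idx], y_tr[idx])
    preds.append(tree.predict(X_te))
bagged_pred = np.mean(preds, axis=0)

print("single tree MSE:", mean_squared_error(y_te, single.predict(X_te)))
print("bagged MSE:    ", mean_squared_error(y_te, bagged_pred))
```

Averaging leaves the bias of the deep trees roughly unchanged but shrinks the variance of the combined prediction, which is why bagging and boosting address opposite ends of the bias-variance decomposition.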