2015
DOI: 10.1002/widm.1143
Generating ensembles of heterogeneous classifiers using Stacked Generalization

Abstract: Over the last two decades, the machine learning and related communities have conducted numerous studies to improve the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative examples of algorithms for generating homogeneous ensembles of classifiers. However, Stacking has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization.

Cited by 128 publications (95 citation statements). References 60 publications.
“…The methods should also be significantly diversified in order for the ensemble method to yield better results (Kuncheva and Whitaker (2003)). Noteworthy is also Table 2 in Sesmero et al. (2015), where one can find information about base classifiers used in SR. The methods we selected are commonly used and meet the criteria of fast implementation and efficiency.…”
Section: Methods
confidence: 99%
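To make the diversity criterion above concrete, here is a minimal sketch of the pairwise disagreement measure, one of the statistics surveyed by Kuncheva and Whitaker (2003). The helper names and the example predictions are illustrative, not taken from the cited papers.

```python
import numpy as np
from itertools import combinations

def disagreement(pred_a, pred_b):
    # Fraction of instances on which two base classifiers disagree.
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

def mean_pairwise_disagreement(predictions):
    # Average disagreement over all pairs of base classifiers; each
    # element of `predictions` is one classifier's label vector on a
    # common validation set.
    pairs = list(combinations(predictions, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

# Illustrative usage with three hypothetical base classifiers.
preds = [np.array([0, 1, 1, 0, 1]),
         np.array([0, 1, 0, 0, 1]),
         np.array([1, 1, 1, 0, 0])]
print(mean_pairwise_disagreement(preds))  # higher value = more diverse pool
```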
“…Although SR is applied to real-world problems less frequently than other ensemble methods, such as bagging or boosting, the exponential growth of data as well as the diversity of these data continue to make SR an interesting alternative (Sesmero et al. (2015)). There are also some new papers which propose extensions to SR (Džeroski and Ženko (2004), Rooney et al. (2004a), Rooney et al. (2004b), Xu et al. (2007), Ozay and Vural (2008), Ni et al. (2009), Ledezma et al. (2010), Shunmugapriya and Kanmani (2013)).…”
Section: Introduction
confidence: 99%
“…Here, the base classifiers are called the level-0 classifiers, and the combiner is called the meta-classifier [6,16]. One of the issues in stacking is obtaining suitable base classifiers and a suitable meta-classifier, especially in relation to each specific dataset [16].…”
Section: Stacking
confidence: 99%
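As a rough illustration of how base and meta-classifiers can be matched to a specific dataset, the sketch below builds the level-1 training data from out-of-fold predictions and scores one candidate meta-classifier. It assumes scikit-learn; the dataset and the candidate learners are placeholders, not the configuration used in the cited studies.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Level-0 candidates: heterogeneous learners trained on the raw features.
level0 = [DecisionTreeClassifier(random_state=0),
          GaussianNB(),
          KNeighborsClassifier()]

# Out-of-fold class-probability predictions become the level-1 features,
# so the meta-classifier never sees outputs of models trained on its own rows.
meta_features = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for clf in level0
])

# Score one candidate meta-classifier on the level-1 representation;
# repeating this for each candidate gives a per-dataset comparison.
meta = LogisticRegression(max_iter=1000)
print(cross_val_score(meta, meta_features, y, cv=5).mean())
```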
“…1 shows the structure of the stacking ensemble. To predict a new instance, the level-0 classifiers produce a vector of predictions that is the input to the meta-classifier, which in turn predicts the class [16]. In this study, we use multi-response linear regression (MLR) as the meta-classifier [6,17].…”
Section: Stacking
confidence: 99%
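Since the quote names multi-response linear regression as the combiner, here is a hedged sketch of that scheme as commonly formulated (following Ting and Witten): one linear regression per class fitted on the level-0 output vector, with the predicted class taken as the argmax. The class name and usage variables are hypothetical, not the cited study's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class MLRCombiner:
    """Multi-response linear regression meta-classifier: one linear
    regression per class, fitted on the level-0 prediction vector;
    the predicted class is the regression with the largest output."""

    def fit(self, Z, y):
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        # One-vs-rest 0/1 targets, one column per class.
        Y = (y[:, None] == self.classes_).astype(float)
        self.models_ = [LinearRegression().fit(Z, Y[:, k])
                        for k in range(len(self.classes_))]
        return self

    def predict(self, Z):
        scores = np.column_stack([m.predict(Z) for m in self.models_])
        return self.classes_[np.argmax(scores, axis=1)]

# Hypothetical usage: Z_train/Z_test hold level-0 outputs (e.g. the
# meta_features built in the previous sketch), y_train the true labels.
# y_pred = MLRCombiner().fit(Z_train, y_train).predict(Z_test)
```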