2013
DOI: 10.1016/j.swevo.2013.04.004
Optimization of stacking ensemble configurations through Artificial Bee Colony algorithm

Cited by 52 publications (37 citation statements)
References 39 publications
“…So, if the quadratic model is selected for both datasets, it can perform better than the other model. This model can be stated by the following equations for Glass and Iris, respectively: [quadratic regression equations for ensemble size, error rate and diversity; the coefficients were garbled in extraction and are not recoverable]. According to these equations, for Glass, scale₂, shift₁ and the square of scale₂ are respectively more important for ensemble size, error rate and diversity, because their related β is larger. For the same reason, shift₂, c₂ and shift₂ have greater impact for Iris on ensemble size, error and diversity, respectively.…”
Section: Dataset Algorithm Ensemble Size Error Rate Q Statistic
confidence: 99%
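The statement above ranks terms of a fitted quadratic model by the magnitude of their β coefficients. A minimal sketch of that idea, on synthetic data with hypothetical predictor names (`scale`, `shift`) standing in for the paper's configuration parameters:

```python
import numpy as np

# Sketch (not the paper's actual data): fit a quadratic response-surface
#   y = b0 + b1*scale + b2*shift + b3*scale^2 + b4*shift^2 + b5*scale*shift
# by least squares, then rank terms by |beta| to see which matter most.
rng = np.random.default_rng(0)
scale = rng.uniform(0.5, 2.0, 50)
shift = rng.uniform(-1.0, 1.0, 50)
# Toy response: shift dominates, with a smaller scale^2 effect.
y = 0.3 * scale**2 - 0.8 * shift + rng.normal(0, 0.05, 50)

X = np.column_stack([np.ones_like(scale), scale, shift,
                     scale**2, shift**2, scale * shift])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

terms = ["1", "scale", "shift", "scale^2", "shift^2", "scale*shift"]
ranked = sorted(zip(terms, beta), key=lambda t: abs(t[1]), reverse=True)
print(ranked[0][0])  # the most influential term by |beta|
```

With the toy response above, the fit recovers `shift` as the dominant term, mirroring how the citing paper reads off the most important configuration parameters from the largest β values.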
“…In other words, an ensemble classifier combines a finite number of classifiers of the same or different types, trained concurrently for a joint classification problem. The ensemble effectively improves performance compared to a single classifier [2].…”
Section: Introduction
confidence: 99%
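The combining idea described above can be illustrated with a minimal majority-vote ensemble (a generic sketch, not the stacking method of the cited paper): three toy threshold classifiers vote, and the ensemble returns the majority class.

```python
from collections import Counter

# Toy "classifiers": threshold rules on a single 1-D feature.
def make_threshold_clf(thresh):
    return lambda x: int(x > thresh)

classifiers = [make_threshold_clf(t) for t in (0.3, 0.5, 0.7)]

def ensemble_predict(x):
    # Majority vote over the member classifiers' predictions.
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

print(ensemble_predict(0.6))  # two of three members vote 1
```

At `x = 0.6` the members disagree (1, 1, 0), and the vote resolves to 1 — the simplest case of an ensemble amending an individual member's decision.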
“…Since ABC has few control parameters and is also easy to implement, it has been widely used in many optimization applications, such as dynamic anomaly detection in MANETs [15], stacking ensembles [16], classification rule discovery [50], and so on.…”
Section: Artificial Bee Colony
confidence: 99%
“…It is as simple as particle swarm optimization (PSO) [13] and differential evolution (DE) [14], and uses only common control parameters, such as population size and maximum cycle number. ABC has shown promising results in the field of optimization [15,16].…”
Section: Introduction
confidence: 99%
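Both statements above note how few control parameters ABC needs. A compact, simplified sketch of the algorithm (1-D minimization only; real ABC separates employed, onlooker and scout phases with fitness-proportional selection, while this keeps just the neighbour-search and abandonment-limit mechanics):

```python
import random

def abc_minimize(f, lo, hi, n_food=10, limit=20, cycles=200, seed=0):
    """Simplified Artificial Bee Colony: minimize f on [lo, hi]."""
    rng = random.Random(seed)
    foods = [rng.uniform(lo, hi) for _ in range(n_food)]  # food sources
    trials = [0] * n_food                                 # stagnation counters
    for _ in range(cycles):
        for i in range(n_food):
            k = rng.randrange(n_food)          # random partner source
            phi = rng.uniform(-1, 1)
            cand = foods[i] + phi * (foods[i] - foods[k])
            cand = min(max(cand, lo), hi)      # clamp to the search range
            if f(cand) < f(foods[i]):          # greedy replacement
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:              # scout: abandon and restart
                foods[i], trials[i] = rng.uniform(lo, hi), 0
    return min(foods, key=f)

best = abc_minimize(lambda x: (x - 2) ** 2, -5, 5)
print(round(best, 2))
```

Note the small parameter set the quotes refer to: population size (`n_food`), abandonment limit (`limit`), and maximum cycle number (`cycles`) are essentially all there is to tune.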
“…There are also some newer papers which propose extensions to SR (Džeroski and Ženko (2004), Rooney et al (2004a), Rooney et al (2004b), Xu et al (2007), Ozay and Vural (2008), Ni et al (2009), Ledezma et al (2010), Shunmugapriya and Kanmani (2013)). An informative overview of SR methods can be found in Sesmero et al (2015).…”
Section: Introduction
confidence: 99%