2008
DOI: 10.1007/978-0-387-87623-8_4
Pareto Cooperative-Competitive Genetic Programming: A Classification Benchmarking Study

Cited by 4 publications (7 citation statements)
References 20 publications
“…Whether projection into a space of pre-determined dimension is anyway optimal is open to question. McIntyre and Heywood [5,6] have presented a co-evolutionary approach to decompose the input space followed by an ensemble of GP classifiers; the ensemble nature of this solution, however, makes this approach computationally unattractive under recall. Both the approaches of McIntyre and Heywood, and of Kattan et al perform domain decomposition by clustering projected data points which relies on some measure of similarity or distance; selection of an appropriate metric is a fundamental and open question [7].…”
Section: Handling Multi-class Datasets
confidence: 99%
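The decomposition-by-clustering step described above can be sketched minimally: project each exemplar onto a direction, cluster the scalar projections, and let each cluster define the subdomain for one classifier. The projection vector `w`, the toy data, and the pure-Python k-means below are illustrative assumptions, not the algorithm of any of the cited papers.

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny k-means over scalar projections; each resulting cluster
    would define the subdomain handled by one GP classifier."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Nearest-centre assignment under absolute distance.
            j = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical learned projection direction and toy 2-d exemplars.
w = [0.8, 0.2]
data = [(0.1, 0.2), (0.2, 0.1), (5.0, 4.8), (5.2, 5.1)]
proj = [sum(wi * xi for wi, xi in zip(w, x)) for x in data]
centers, clusters = kmeans_1d(proj, k=2)
```

Note that the choice of `abs(p - c)` as the distance here is precisely the "appropriate metric" question the quotation flags as open.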
“…Tables 4,5,6,7,8 show the confusion matrices for most of the UCI datasets in Table 3, for the individuals with the best test error of the ten training repetitions. (We omit the confusion matrices for the WIN, TGD and IRIS datasets as the best performing individuals all achieved 100% accuracy.)…”
Section: UCI and Statlog Datasets
confidence: 99%
“…One potential drawback of the approach is that the exemplar archives might grow considerably. However, enforcing a finite archive through the introduction of diversity measures (such as fitness sharing) has been empirically shown to be effective for efficiently decoupling fitness evaluation from training partition cardinality under batch (offline) learning [132,133]. The extension to sliding window interfaces and single pass constraints for online learning indicate that it is possible to match the performance of multi-pass batch algorithms when there is no labelling error [8].…”
Section: Sampling
confidence: 99%
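A minimal sketch of the fitness-sharing mechanism mentioned above, assuming a scalar behavioural descriptor per archive member and a triangular sharing kernel; the function name and the sharing radius `sigma` are illustrative, not taken from [132,133].

```python
def shared_fitness(raw, behaviours, sigma=1.0):
    """Divide each raw fitness by its niche count so that individuals
    with similar behaviour split credit while novel behaviour keeps
    full credit; this lets a fixed-size archive stay diverse under
    truncation."""
    def sh(d):  # triangular sharing kernel
        return 1.0 - d / sigma if d < sigma else 0.0
    return [
        raw[i] / sum(sh(abs(b - b2)) for b2 in behaviours)
        for i, b in enumerate(behaviours)
    ]

# Three equally accurate individuals; two behave almost identically.
raw = [1.0, 1.0, 1.0]
behaviours = [0.0, 0.1, 5.0]
fit = shared_fitness(raw, behaviours)
```

Truncating the archive to the top-N by `fit` would then retain the behaviourally distinct member, which is how sharing decouples archive size from training-partition cardinality.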
“…In effect team members were learning to act everywhere, potentially negating insight into solutions. Recent approaches for addressing such a drawback represent the team as an archive of individuals independent from the population [31,32] or apply some form of ensemble selection post training [5,13]. Multi-population Teams are composed by sampling individuals from n different populations.…”
Section: Evolutionary Problem Decomposition
confidence: 99%
“…Naturally, maintaining sufficient diversity within the population becomes very important. One approach to this is to place a lot of emphasis on designing fitness functions that explicitly capture diversity as well as overall accuracy (of the team) e.g., negative correlation [5,13,28], strongly typed GP [21], local membership functions [31][32][33]; frequently under the context of multi-objective fitness formulations. One potential disadvantage of the single population model is that a recently introduced offspring can disrupt good quality team behaviour, as recognized in the 'profit sharing problem' of learning classifier systems [42].…”
Section: Evolutionary Problem Decomposition
confidence: 99%
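One common way for a fitness function to "explicitly capture diversity as well as overall accuracy", in the spirit of the negative-correlation approaches cited, is to reward a member for disagreeing with its teammates. The weight `lam` and all names below are hypothetical; this is a toy scalarisation, not the formulation of [5,13,28].

```python
def member_fitness(preds_i, team_preds, targets, lam=0.3):
    """Accuracy plus a weighted mean disagreement with the other team
    members; rewards members that specialise rather than duplicating
    teammates' behaviour."""
    n = len(targets)
    acc = sum(p == t for p, t in zip(preds_i, targets)) / n
    others = [q for q in team_preds if q is not preds_i]
    dis = sum(
        sum(a != b for a, b in zip(preds_i, q)) / n for q in others
    ) / max(len(others), 1)
    return acc + lam * dis

targets = [1, 1, 0, 0]
team = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
scores = [member_fitness(p, team, targets) for p in team]
```

Under a multi-objective (Pareto) formulation, accuracy and disagreement would instead be kept as separate objectives rather than collapsed into one weighted sum.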