2008
DOI: 10.1007/s10710-008-9067-9

Coevolutionary bid-based genetic programming for problem decomposition in classification

Abstract: In this work a cooperative, bid-based model for problem decomposition is proposed with application to discrete action domains such as classification. This represents a significant departure from models where each individual constructs a direct input-outcome map, for example, from the set of exemplars to the set of class labels as is typical under the classification domain. In contrast, the proposed model focuses on learning a bidding strategy based on the exemplar feature vectors; each individual is associate…
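The bid-based decision rule outlined in the abstract can be illustrated with a minimal sketch. The Python fragment below is a hypothetical illustration, not the authors' implementation: it assumes a winner-takes-all resolution in which each individual pairs a class label with a bid program over the exemplar's feature vector, and the highest bidder's label becomes the prediction. The names `Learner` and `predict` are placeholders.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical sketch of winner-takes-all, bid-based classification:
# each learner pairs a class label with a bid program evaluated on the
# exemplar's feature vector; the highest bidder's label is the prediction.

@dataclass
class Learner:
    class_label: int                                  # the class this learner advocates
    bid_program: Callable[[Sequence[float]], float]   # bid as a function of features

def predict(team: Sequence[Learner], features: Sequence[float]) -> int:
    """Return the class label of the learner that bids highest on this exemplar."""
    winner = max(team, key=lambda learner: learner.bid_program(features))
    return winner.class_label

# Usage with two toy learners (illustrative only).
team = [
    Learner(0, lambda x: 1.0 - x[0]),  # bids high when feature 0 is small
    Learner(1, lambda x: x[0]),        # bids high when feature 0 is large
]
print(predict(team, [0.8]))  # -> 1
```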

Cited by 20 publications (18 citation statements)
References 34 publications
“…The first is the Michigan-style learning classifier system XCS [24], which is similar to the SBB approach in that it provides solutions in the form of a cooperating team of production rules evolved through a strength-based bidding methodology. The specific XCS implementation used here, denoted XCSR, was augmented to handle real-valued inputs and has previously been evaluated on several classification problems [11]. The proposed approach was also compared against the SVM implementation LIBSVM (version 2.85) [4] on the basis that SVM models represent an established performance baseline.…”
Section: Discussion (mentioning)
confidence: 99%
“…With the exception of the population sizes and gaps, no fine tuning of parameters was performed. Some fine tuning of the XCSR parameters was performed and the details are available in [11]. Values for the SVM cost parameter C of 1, 10, and 100 were investigated under both the radial basis function (RBF) and Sigmoid kernels.…”
[Class-distribution table interleaved with the quoted text during extraction; reconstructed below, with the first dataset's label lost:]
Dataset      Partition   Per-class exemplar counts            Total
(unknown)    Train       93, 191, 3488                        3772
(unknown)    Test        73, 177, 3178                        3428
CEN (41)     Train       187141, 12382                        199523
CEN (41)     Test        93576, 6186                          99762
SHU (9)      Train       34108, 37, 132, 6748, 2458, 6, 11    43500
SHU (9)      Test        11478, 13, 39, 2155, 809, 4, 2       14500
Section: Methods (mentioning)
confidence: 99%
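The parameter sweep described in this Methods excerpt (C over 1, 10, and 100 under RBF and Sigmoid kernels) can be sketched as follows. This is a hypothetical illustration using scikit-learn's SVC, itself a LIBSVM wrapper, as a stand-in for the LIBSVM 2.85 tools used in the cited work; the synthetic dataset and the train/test split are placeholders, not the benchmark datasets from the study.

```python
# Hedged sketch: sweep the SVM cost parameter C over {1, 10, 100} for the
# RBF and sigmoid kernels, mirroring the grid described in the quoted text.
# scikit-learn's SVC (a LIBSVM wrapper) stands in for LIBSVM 2.85 here.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data; the cited study used benchmark classification datasets.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("rbf", "sigmoid"):
    for C in (1, 10, 100):
        model = SVC(C=C, kernel=kernel).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"kernel={kernel:8s} C={C:<4d} test accuracy={acc:.3f}")
```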
“…Thus, a variable length representation is assumed […]. Individuals from the learner population assume the structure of bid-based GP (Fig. 2) [25]. Thus, each learner is defined in terms of the tuple ⟨c, b⟩, where c is a scalar class label the range of which is defined by the task domain, or for a C-class problem c ∈ {0,…”
Section: Representation (mentioning)
confidence: 99%
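To make the quoted ⟨c, b⟩ representation concrete, here is a small, hypothetical sketch of a learner as a class label paired with a variable-length bid program. The register-machine-style instruction format is an assumption for illustration only and is not the encoding used in the cited paper.

```python
# Hypothetical sketch of the quoted learner tuple <c, b>: a class label c and
# a variable-length, linear-GP-style bid program b. The instruction format is
# an illustrative assumption, not the cited implementation.
from dataclasses import dataclass
from typing import List, Sequence, Tuple

Instruction = Tuple[int, int, float]  # (target register, feature index, weight)

@dataclass
class BidLearner:
    class_label: int            # c: scalar class label, e.g. in {0, ..., C-1}
    program: List[Instruction]  # b: variable-length instruction list

    def bid(self, features: Sequence[float], n_registers: int = 4) -> float:
        """Execute the bid program against an exemplar's feature vector."""
        registers = [0.0] * n_registers
        for target, feat_idx, weight in self.program:
            registers[target % n_registers] += weight * features[feat_idx % len(features)]
        return registers[0]  # register 0 holds the bid

# Example: a two-instruction learner that advocates class 2.
learner = BidLearner(class_label=2, program=[(0, 1, 0.5), (0, 0, -0.2)])
print(learner.bid([1.0, 3.0]))  # -> 1.3
```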
“…A popular modularization concept in LGP is the evolution of program teams (Brameier and Banzhaf, 2001). A team solution is formed by an odd number of programs, each of which has one vote.…”
mentioning
confidence: 99%
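The team-voting scheme mentioned in this last statement amounts to majority voting over an odd number of programs, the odd count ruling out ties in a two-class setting. A minimal, hypothetical sketch, with placeholder vote functions standing in for evolved programs:

```python
# Hedged sketch of team voting in LGP: an odd number of programs each cast
# one vote for a class, and the majority decides.
from collections import Counter
from typing import Callable, Sequence

def team_vote(programs: Sequence[Callable[[Sequence[float]], int]],
              features: Sequence[float]) -> int:
    """Return the class receiving the most votes from the team."""
    assert len(programs) % 2 == 1, "an odd team size avoids two-class ties"
    votes = Counter(program(features) for program in programs)
    return votes.most_common(1)[0][0]

# Usage with three toy "programs" (illustrative only).
team = [lambda x: int(x[0] > 0.5), lambda x: int(x[1] > 0.5), lambda x: 0]
print(team_vote(team, [0.9, 0.2]))  # -> 0 (two of three programs vote 0)
```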