2008
DOI: 10.1007/978-3-540-88138-4_6

Revisiting UCS: Description, Fitness Sharing, and Comparison with XCS

Cited by 32 publications (48 citation statements)
References 14 publications
“…The first one, primarily aimed at studying performance through predictive accuracy, involves training SS-LCS and UCS with and without clustering-based initialization (SS-LCS-CI/SS-LCS-NI and UCS-CI/UCS-NI, respectively), as well as their rival algorithms in a variety of real-world classification problems. The performance metric used throughout this set of experiments for algorithm comparisons is the average accuracy rate of 5 tenfold stratified cross-validation runs, in line with other comparative studies in the literature (Orriols-Puig et al 2008b; García et al 2009). …”
Section: Experimental Methodology
confidence: 97%
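The evaluation protocol quoted above (the average accuracy over 5 runs of tenfold stratified cross-validation) is straightforward to reproduce. Below is a minimal sketch using scikit-learn; the dataset and the decision-tree classifier are placeholders standing in for the real-world problems and the SS-LCS/UCS learners of the cited study, which are not shown here:

```python
# Minimal sketch: average accuracy over 5 repetitions of
# stratified 10-fold cross-validation (50 accuracy estimates).
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)             # stand-in for a real-world dataset
clf = DecisionTreeClassifier(random_state=0)  # stand-in for SS-LCS / UCS

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"average accuracy: {scores.mean():.4f} (std {scores.std():.4f})")
```

Stratification keeps the class proportions of each fold close to those of the full dataset, which matters for the imbalanced multi-class problems these comparative studies typically include.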
“…Methods that belong to the second approach and may provide an effective and computationally feasible alternative, in line with our initial requirement for high interpretability, include (a) classifiers inducing sets of rules, such as FOIL (Quinlan 1996), PART (Frank and Witten 1998), HIDER (Aguilar-Ruiz et al 2003), and SIA (Venturini 1993); (b) learning classifier systems, such as XCS (Wilson 1995), UCS (Bernadó-Mansilla and Garrell-Guiu 2003; Orriols-Puig and Bernadó-Mansilla 2008b), GAssist (Bacardit 2004), and ILGA (Guan and Zhu 2005); and (c) algorithms inducing decision trees, such as C4.5 (Quinlan 1993).…”
Section: Introduction
confidence: 97%
“…The reason is that it evolves only those highly-rewarded classifiers of the match set in the correct set, which predict the same class as that of the training example [33]. In comparison, GAssist has serious problems in dealing with multi-class problems, especially when the number of output classes is more than 5.…”
Section: Effect Of Nature Of Dataset
confidence: 99%
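The correct-set mechanism this statement credits for UCS's multi-class strength can be illustrated with a short sketch. In UCS, the match set [M] holds every classifier whose condition matches the input, and the correct set [C] is the subset of [M] advocating the training example's class; the genetic search is applied within [C]. The `Classifier` fields and bookkeeping below are simplified assumptions for illustration, not the full UCS specification:

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str    # ternary string, e.g. "01#1" ('#' = don't care)
    action: int       # the class this rule advocates
    correct: int = 0  # times the rule appeared in [C]
    matched: int = 0  # times the rule appeared in [M]

def matches(cond: str, inputs: str) -> bool:
    """A position matches if it is '#' or equals the input bit."""
    return all(c == "#" or c == x for c, x in zip(cond, inputs))

def form_sets(population, inputs, true_class):
    # [M]: every classifier whose condition matches the training example.
    match_set = [cl for cl in population if matches(cl.condition, inputs)]
    # [C]: only match-set members predicting the example's class;
    # UCS evolves classifiers within this set, so each class's rules
    # compete only against rules advocating the same class.
    correct_set = [cl for cl in match_set if cl.action == true_class]
    for cl in match_set:
        cl.matched += 1
    for cl in correct_set:
        cl.correct += 1
    # A rule's accuracy (correct / matched) drives its fitness in UCS.
    return match_set, correct_set
```

Because selection pressure is confined to [C], adding more output classes does not force rules for different classes to compete directly, which is the property the quoted passage contrasts with GAssist's behavior.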
“…The reason is that it evolves only those highly-rewarded classifiers of the match set in the correct set, which predict the same class as that of the training example [24]. In comparison, GAssist has serious problems in dealing with multi-class problems, especially when the number of output classes is more than 5.…”
Section: Role Of Multiple Classes
confidence: 99%