2009
DOI: 10.1109/TEVC.2009.2019829
Facetwise Analysis of XCS for Problems With Class Imbalances

Abstract: Michigan-style learning classifier systems (LCSs) are online machine learning techniques that incrementally evolve distributed subsolutions, each of which solves a portion of the problem space. As in many machine learning systems, extracting accurate models from problems with class imbalances (that is, problems in which one of the classes is poorly represented with respect to the other classes) has been identified as a key challenge for LCSs. Empirical studies have shown that Michigan-style LCSs fail to provid…

Cited by 49 publications (36 citation statements)
References 32 publications
“…This situation occurs when the concepts are represented within small clusters, which arise as a direct result of underrepresented subconcepts [11,40]. Although such small disjuncts are implicit in most problems, their existence greatly increases the complexity of the problem under class imbalance, because it becomes hard to know whether these examples represent an actual subconcept or are merely attributable to noise [41].…”
Section: Small Sample Size and Small Disjuncts
confidence: 99%
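
To make the quoted point concrete, the following is a minimal sketch (purely illustrative names and sizes, not taken from the cited papers) of an imbalanced dataset whose minority class forms small disjuncts, i.e., a few tiny clusters that are hard to tell apart from noise:

    import random

    # Hypothetical illustration: one broad majority region versus a minority
    # class scattered over a few tiny clusters (the "small disjuncts").
    random.seed(0)
    majority = [(random.uniform(0.0, 1.0), 0) for _ in range(1000)]
    centers = [0.15, 0.55, 0.9]   # each disjunct holds only a handful of points
    minority = [(c + random.gauss(0.0, 0.01), 1) for c in centers for _ in range(5)]
    data = majority + minority    # 1000 vs. 15 examples, roughly a 67:1 imbalance

With only five examples per cluster, a learner has little evidence to decide whether each cluster is a genuine subconcept or mislabeled noise, which is exactly the difficulty the excerpt describes.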
“…The number of movement iterations is set to 500,000; however, as mentioned above, active learning does not occur at every iteration, but only when the approximation is inaccurate. The threshold θ_GA specifies how frequently the GA is activated and is increased to 200 in order to compensate for the imbalanced sampling, as suggested in [19]. Towards the end of a run, condensation [32] is activated, that is, reproduction without mutation and crossover, to remove evolutionary overhead.…”
Section: Experimental Validation
confidence: 99%
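
For context, the threshold θ_GA in XCS gates how often the genetic algorithm runs inside an action set. A minimal sketch of that standard trigger rule, assuming each classifier records a GA time stamp ts and a numerosity num (attribute names are illustrative):

    def ga_should_run(action_set, t, theta_ga=200):
        # Run the GA only when the numerosity-weighted mean time since the
        # last GA event in this action set exceeds theta_ga.
        total = sum(cl.num for cl in action_set)
        mean_ts = sum(cl.ts * cl.num for cl in action_set) / total
        return t - mean_ts > theta_ga

Raising θ_GA to 200, as in the quoted setup, spaces GA events further apart, so rules covering the rarely sampled class accumulate more evaluations between reproduction events.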
“…Additionally, the optimal value of the learning rate depends on the generality and accuracy of the classifier being learned. The parameter β should be decreased for overgeneral rules, in which large fluctuations of the prediction p, the prediction error ε, and the fitness f may occur (Butz et al., 2005; Orriols-Puig et al., 2009).…”
Section: Troć and O. Unold
confidence: 99%
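
The fluctuations the excerpt mentions come from the Widrow-Hoff updates XCS applies after each reward. A simplified sketch, assuming a classifier object with prediction p, prediction error eps, and fitness f (the fitness sharing XCS performs across the action set is omitted here, and the default parameter values are illustrative):

    def update(cl, reward, beta=0.2, eps0=10.0, alpha=0.1, nu=5.0):
        cl.p += beta * (reward - cl.p)                  # payoff prediction
        cl.eps += beta * (abs(reward - cl.p) - cl.eps)  # prediction error
        kappa = 1.0 if cl.eps < eps0 else alpha * (cl.eps / eps0) ** (-nu)
        cl.f += beta * (kappa - cl.f)                   # fitness from accuracy

Each estimate moves a fraction β toward its current target, so an overgeneral rule, whose targets differ wildly from step to step, sees p, ε, and f swing in proportion to β; decreasing β damps those swings.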
“…Nevertheless, some parameters have a direct influence on classifier evaluation and cannot simply be self-adapted. For example, the learning rate controls the updates of classifier parameters (among others, the fitness updates), and an incorrect value of β makes inaccurate classifiers overfitted (Orriols-Puig et al., 2009). In (Hurst and Bull, 2003) it was shown that adaptation of β at the individual level is indeed "selfish", and therefore the "enforced cooperation" method, dedicated to systems solving multi-step problems, was proposed.…”
Section: Troć and O. Unold
confidence: 99%
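
The individual-level self-adaptation being critiqued can be pictured as each rule evolving its own learning rate. A deliberately rough sketch of that idea (not the actual mechanism of Hurst and Bull, 2003, nor their enforced-cooperation variant), assuming each classifier carries its own beta attribute that the GA perturbs like any other allele:

    import random

    def mutate_beta(cl, sigma=0.05, lo=0.01, hi=0.5):
        # Self-adaptation: the offspring inherits a perturbed copy of its
        # parent's learning rate, clipped to a plausible range.
        cl.beta = min(hi, max(lo, cl.beta + random.gauss(0.0, sigma)))

Because selection then favors whatever β benefits the individual rule rather than the population's shared evaluation, such adaptation can be "selfish" in the sense the excerpt describes.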