2011 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE)
DOI: 10.1109/cidue.2011.5948488
Theoretical and empirical analysis of diversity in non-stationary learning

Abstract: In non-stationary learning, we require a predictive model to learn over time, adapting to changes in the concept when necessary. A major concern in any algorithm for non-stationary learning is its rate of adaptation to new concepts. When tackling such problems with ensembles, the concept of diversity appears to be significant. In this paper, we discuss how we expect diversity to affect the rate of adaptation in non-stationary ensemble learning. We then analyse the relation between voting margins and…

Cited by 4 publications (3 citation statements) · References 15 publications
“…Conversely, weightless frameworks emphasize plasticity and tend to drop weaker ensemble members immediately (e.g., [81,167]);
• Classifier weight adaptation versus data-instance-based weight adaptation: the weighting of votes from an ensemble is generally a function either of classifier performance [54,65,152] or of the data from which a member of the ensemble was constructed (e.g., [20,150]);
• Identification of an ensemble member for replacement: various heuristics have been proposed for targeting the ensemble member to replace when performance as a whole is deemed poor, e.g., replace the oldest [167] or the member with the least 'contribution' [107,171];
• Role of diversity in ensembles: within an environment undergoing change, diversity provides faster reaction times to a change, but does not necessarily facilitate fast convergence to the new concept [137,165]. One implication of this might be that the amount of diversity/plasticity needs to in some way 'match' the amount of concept drift/shift in the stream.…”
Section: Ensemble ML Perspective
confidence: 99%
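The replacement heuristics quoted above ("replace the oldest" versus "replace the member with the least contribution") can be sketched in a few lines. This is a hypothetical illustration — the field names, `index_to_replace`, and the strategy labels are mine, not from the cited papers:

```python
# Hypothetical sketch of two member-replacement heuristics for an
# online ensemble; 'age' and 'contribution' fields are illustrative.
def index_to_replace(members, strategy="oldest"):
    """Return the index of the ensemble member to replace."""
    if strategy == "oldest":
        # Age-based heuristic: drop the longest-lived member.
        return max(range(len(members)), key=lambda i: members[i]["age"])
    if strategy == "least_contribution":
        # Performance-based heuristic: drop the weakest contributor.
        return min(range(len(members)), key=lambda i: members[i]["contribution"])
    raise ValueError(f"unknown strategy: {strategy}")

ensemble = [
    {"age": 5, "contribution": 0.9},
    {"age": 12, "contribution": 0.4},
    {"age": 3, "contribution": 0.1},
]
print(index_to_replace(ensemble, "oldest"))              # → 1
print(index_to_replace(ensemble, "least_contribution"))  # → 2
```

Note that the two heuristics can disagree, as they do here: the oldest member (index 1) is not the weakest contributor (index 2).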
“…The diversity term can be characterized as being 'good' or 'bad' [26], a result that has a corresponding observation in EC [187].
• Constructing a new ensemble member: diversity of the base learner [65,102]; sampling stream data using boosting versus bagging [147,150]
• Identifying an ensemble member for replacement: age-based heuristics [167]; performance-based heuristics [107,171]
• Class imbalance: effect of sampling biases [34,55,86,186]
• Drift management: incremental updating of current models [107,158]; adapting voting weights [2,90,152]
• Shift management: outright replacement of one or more ensemble members [68,81,167]
• Diversity management: impact on capacity for change [26,137,165]
Under non-stationary data, it has been established that reducing the absolute value of the ensemble margin produces an equivalent increase in diversity [165]. A second open question regards the method assumed for combining the outcomes of multiple models under a non-stationary task.…”
Section: Ensemble ML Perspective
confidence: 99%
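The quoted relation — that shrinking the absolute ensemble margin corresponds to increasing diversity — can be illustrated numerically for equally weighted binary voters. This is a minimal sketch using pairwise disagreement as a simple diversity proxy; the function names and the choice of proxy are mine, not the paper's formal analysis:

```python
def disagreement(votes):
    """Fraction of member pairs that disagree: a simple diversity proxy."""
    n = len(votes)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(votes[i] != votes[j] for i, j in pairs) / len(pairs)

def abs_margin(votes, true_label=1):
    """|fraction voting correctly - fraction voting incorrectly|
    for equally weighted binary voters."""
    correct = sum(v == true_label for v in votes) / len(votes)
    return abs(correct - (1 - correct))

# Unanimous ensemble: maximal |margin|, zero diversity.
print(abs_margin([1, 1, 1, 1]), disagreement([1, 1, 1, 1]))  # → 1.0 0.0
# As |margin| shrinks, pairwise disagreement grows.
print(abs_margin([1, 1, 1, 0]), disagreement([1, 1, 1, 0]))  # → 0.5 0.5
print(abs_margin([1, 1, 0, 0]), disagreement([1, 1, 0, 0]))  # → 0.0 ≈0.667
```

The monotone trade-off visible here is what the quoted statement describes: the vote split that reduces the margin is exactly the split that makes members disagree more often.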
“…Diversity and voting margins are two important aspects of ensemble methods that are theoretically connected. The voting margin can be defined as the confidence-weighted correctness of the prediction on a given example (Richard Stapenhurst, 2011). Voting margins are useful in determining generalization error.…”
Section: Background On Ensemble Diversity and Voting Margins
confidence: 99%
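As a concrete illustration of the definition quoted above — the margin as the confidence-weighted correctness of a prediction — here is a minimal sketch for a weighted-vote ensemble. The function name and the assumption that weights sum to 1 are mine:

```python
from collections import defaultdict

def voting_margin(votes, weights, true_label):
    """Confidence-weighted correctness of an ensemble prediction.

    votes: predicted label from each ensemble member
    weights: non-negative voting weights, assumed to sum to 1
    true_label: the correct label for this example

    Margin = (weight mass on the true label) - (largest weight mass
    on any other label); positive iff the weighted vote is correct.
    """
    mass = defaultdict(float)
    for v, w in zip(votes, weights):
        mass[v] += w
    correct = mass.pop(true_label, 0.0)
    best_wrong = max(mass.values(), default=0.0)
    return correct - best_wrong

# Three members, equal weights; two vote for the true label.
print(voting_margin([1, 1, 0], [1/3, 1/3, 1/3], true_label=1))  # ≈ 0.333
```

A margin near +1 means a confident, correct ensemble; a margin near 0 means the vote is nearly split, which is the low-margin, high-diversity regime discussed in the statements above.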