2014
DOI: 10.1016/j.ins.2013.12.011
Combining block-based and online methods in learning ensembles from concept drifting data streams

Cited by 159 publications (84 citation statements)
References 16 publications
“…3 For the real-world datasets we chose four data streams which are commonly used as benchmarks [8,25,34,43]. More precisely, we chose Airlines (Air) and Electricity (Elec) as examples of fairly balanced datasets, and KDDCup and PAKDD as examples of moderately imbalanced datasets.…”
Section: Datasets
confidence: 99%
“…The Online Accuracy Updated Ensemble (OAUE) [8] maintains a weighted set of component classifiers, such that the weighting is given by an adaptation to incremental learning of the weight function presented in [25]. Every d instances, the least accurate classifier is replaced by a candidate classifier, which has been trained only in the last d instances.…”
Section: Ensemble Methods for Data Stream Classification
confidence: 99%
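The replacement mechanism described in the statement above — score each component on incoming instances, train a candidate only on the most recent d instances, and swap out the least accurate component every d instances — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class and method names are hypothetical, the toy majority-label learner stands in for a real incremental classifier, and the actual OAUE additionally weights components with an incremental error-based function.

```python
class SimpleMajorityLearner:
    """Toy component classifier: predicts the majority label seen so far.
    A stand-in for a real incremental learner (e.g. a Hoeffding tree)."""
    def __init__(self):
        self.counts = {}

    def train(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None


class PeriodicReplacementEnsemble:
    """Hypothetical sketch of OAUE-style periodic component replacement."""
    def __init__(self, max_components=10, d=100):
        self.d = d                        # replacement period (window size)
        self.max_components = max_components
        self.components = []              # entries: [learner, window_errors]
        self.candidate = SimpleMajorityLearner()
        self.seen = 0

    def process(self, x, y):
        # Test-then-train: score each component before updating it.
        for entry in self.components:
            learner = entry[0]
            if learner.predict(x) != y:
                entry[1] += 1
            learner.train(x, y)
        # The candidate is trained only on the current window of d instances.
        self.candidate.train(x, y)
        self.seen += 1
        if self.seen % self.d == 0:
            self._replace_weakest()

    def _replace_weakest(self):
        if len(self.components) < self.max_components:
            self.components.append([self.candidate, 0])
        else:
            # Replace the least accurate component with the candidate.
            worst = max(range(len(self.components)),
                        key=lambda i: self.components[i][1])
            self.components[worst] = [self.candidate, 0]
        # Reset window error counts and start a fresh candidate.
        for entry in self.components:
            entry[1] = 0
        self.candidate = SimpleMajorityLearner()
```

On a stream of 30 instances with d=10 and at most 2 components, the ensemble fills both slots after 20 instances and thereafter swaps the weakest component every 10 instances.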
“…Ensemble classifiers achieve high accuracy through the combination of a diverse set of component classifiers, such that (ideally) incorrect predictions are obfuscated while correct predictions are highlighted. In a data stream scenario, ensemble classifiers also have the advantageous characteristic of being flexible, i.e., it is possible to replace (or remove) component classifiers based on drift detector algorithms [5,6] or other methods [8,12,13,18]. Even though there is not a solid proof of a strong correlation between accuracy and diversity [19,20], it is possible to verify that a set of homogeneous classifiers that always decide for the same labels cannot achieve better (or worse) results than one of them alone.…”
Section: Introduction
confidence: 99%
“…Kolter and Maloof proposed the dynamic weighted majority algorithm [15] and the additive expert ensemble algorithm [16] for online learning in succession. Brzezinski and Stefanowski [17] proposed the online accuracy updated ensemble (OAUE) algorithm, which uses the proposed function to incrementally train and weight component classifiers.…”
Section: Related Work
confidence: 99%