2016
DOI: 10.1007/s10994-016-5613-5

Multi-label classification via multi-target regression on data streams

Abstract: Multi-label classification (MLC) tasks are encountered more and more frequently in machine learning applications. While MLC methods exist for the classical batch setting, only a few methods are available for the streaming setting. In this paper, we propose a new methodology for MLC via multi-target regression in a streaming setting. Moreover, we develop a streaming multi-target regressor, iSOUP-Tree, that uses this approach. We experimentally compare two variants of the iSOUP-Tree method (building regression and mod…
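The methodology summarized in the abstract reduces multi-label classification to multi-target regression: each label is encoded as a numeric target in {0, 1}, a multi-target regressor is trained, and predicted scores are mapped back to a label set (commonly by thresholding, e.g. at 0.5). The batch sketch below illustrates only this reduction, with assumed toy data and a scikit-learn regressor standing in for the paper's streaming iSOUP-Tree; it is not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # batch stand-in for the streaming iSOUP-Tree

# Toy data (assumed for illustration): 200 examples, 10 features, 4 binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = (rng.random(size=(200, 4)) < 0.3).astype(float)  # label-indicator matrix in {0, 1}

# MLC via multi-target regression: treat every label as a numeric target
# and fit a single multi-target regressor on all targets at once.
regressor = DecisionTreeRegressor(max_depth=5).fit(X, Y)

# Predictions are real-valued scores per label; thresholding (0.5 here, an
# assumed choice) maps them back to a predicted label set.
scores = regressor.predict(X[:5])              # shape (5, 4)
predicted_labels = (scores >= 0.5).astype(int)
print(predicted_labels)
```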

Cited by 62 publications (29 citation statements).
References 37 publications (54 reference statements).
“…The evaluation of the ensembles is started after the first learner in the ensemble is formed. We used a fixed number of 10 classifiers as the ensemble size, mimicking previously conducted experiments to enable comparison [19,22]. For incremental evaluation of the classifiers, we used window-based evaluation with window sizes {100, 250, 500, 1000}, according to the size of the datasets 2 .…”
Section: Methods
Citation type: mentioning
confidence: 99%
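The window-based (test-then-train) evaluation described in this statement can be sketched as follows. The helper name and the model's predict(x)/update(x, y) interface are assumptions for illustration, not the evaluation code of the cited work.

```python
from collections import deque

def windowed_prequential_eval(model, stream, window_size=500):
    """Test-then-train evaluation over a sliding window (hypothetical helper).

    `model` is assumed to expose predict(x) and update(x, y); `stream` yields
    (x, y) pairs. Accuracy is reported once per full window, mirroring the
    window sizes {100, 250, 500, 1000} mentioned in the citing paper. For
    multi-label targets the equality check below is exact-match accuracy,
    a simplification chosen for this sketch.
    """
    window = deque(maxlen=window_size)
    per_window_accuracy = []
    for i, (x, y) in enumerate(stream, start=1):
        window.append(1.0 if model.predict(x) == y else 0.0)  # test first ...
        model.update(x, y)                                     # ... then train
        if i % window_size == 0:
            per_window_accuracy.append(sum(window) / len(window))
    return per_window_accuracy
```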
“…We have 7 baseline models. Four of the baselines use fixed-size windows with no concept-drift detection mechanism: EBR [24], ECC [24], EPS [23], and EBRT [19], whereas three of them use ADWIN as their concept drift detector: EaBR [22], EaCC [22], and EaPS [22]. In all models, the BR and CC transformations use a Hoeffding Tree classifier, whereas the PS transformation uses a Naive Bayes classifier.…”
Section: Methods
Citation type: mentioning
confidence: 99%
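For context on the transformations named in this statement, here is a minimal sketch of the Binary Relevance (BR) idea: one independent incremental binary learner per label. The class and the base learner's update/predict interface are illustrative assumptions, not the EBR/EaBR implementations from the cited papers.

```python
class BinaryRelevance:
    """Minimal sketch of the BR transformation: one independent incremental
    binary learner per label. `base_factory` is assumed to construct a learner
    exposing update(x, y) and predict(x), e.g. a streaming Hoeffding tree."""

    def __init__(self, n_labels, base_factory):
        self.learners = [base_factory() for _ in range(n_labels)]

    def update(self, x, y):
        # y is a 0/1 label vector of length n_labels.
        for learner, y_j in zip(self.learners, y):
            learner.update(x, y_j)

    def predict(self, x):
        # Predict each label independently and return the 0/1 vector.
        return [learner.predict(x) for learner in self.learners]
```

Classifier chains (CC) extend this by feeding each label's learner the predictions for earlier labels as additional features, while pruned sets (PS) treat sufficiently frequent label combinations as single meta-classes.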