2018
DOI: 10.1016/j.inffus.2017.12.001
Review of ensembles of multi-label classifiers: Models, experimental study and prospects

Cited by 133 publications (93 citation statements)
References 34 publications

“…In the literature, besides the practical applications of ensemble methods in many areas, research on ensemble methods can be divided into three aspects: information for the trained classifiers of a given learning task can be inferred so as to obtain the optimal combining weights of the trained classifiers. Moreover, several ensemble systems were developed for different learning paradigms such as incremental learning [30-32], semi-supervised learning [33], and multi-label learning [34,35]. For instance, Pham et al. [31] combined random projections and Hoeffding trees to construct an incremental online ensemble learning system.…”
Section: Ensemble Methods · Citation type: mentioning
Confidence: 99%
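
The excerpt above describes Pham et al.'s approach only in outline. As a minimal sketch of the general idea, not of their exact method, the following trains one online Hoeffding tree per ensemble member on a member-specific random projection of the input and predicts by majority vote. It assumes the river library; the class and parameter names are illustrative.

```python
import numpy as np
from river import tree

class RandomProjectionHoeffdingEnsemble:
    """Illustrative incremental ensemble: random projections + Hoeffding trees."""

    def __init__(self, n_members=10, in_dim=20, proj_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        # One Gaussian random-projection matrix per ensemble member.
        self.projections = [
            rng.normal(size=(in_dim, proj_dim)) / np.sqrt(proj_dim)
            for _ in range(n_members)
        ]
        self.members = [tree.HoeffdingTreeClassifier() for _ in range(n_members)]

    def _project(self, x, P):
        # river models take dict-valued feature vectors.
        z = np.asarray(x, dtype=float) @ P
        return {f"z{j}": float(v) for j, v in enumerate(z)}

    def learn_one(self, x, y):
        # Incremental update: every member sees its own projection of x.
        for P, m in zip(self.projections, self.members):
            m.learn_one(self._project(x, P), y)

    def predict_one(self, x):
        votes = [
            m.predict_one(self._project(x, P))
            for P, m in zip(self.projections, self.members)
        ]
        votes = [v for v in votes if v is not None]  # untrained members abstain
        return max(set(votes), key=votes.count) if votes else None
```

The per-member projections serve the same role as feature subsampling in batch ensembles: they decorrelate the members, which is what makes the majority vote useful.
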
“…-ML. The most relevant algorithm adaptations [118] are based on standard classification algorithms with added support for choosing more than one class at a time: adaptations exist for k-NN [117], decision trees [24], SVMs [36], association rules [106] and ensembles [82].…”
Section: Algorithm Adaptation · Citation type: mentioning
Confidence: 99%
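
Of the adaptations listed in the excerpt above, the k-NN one is the simplest to make concrete. The sketch below predicts each label by a majority vote among the k nearest training examples; it illustrates the general flavour only, since published adaptations such as ML-kNN refine the vote with Bayesian posteriors. All names are illustrative.

```python
import numpy as np

class MultiLabelKNN:
    """Illustrative k-NN adaptation: per-label majority vote among neighbours."""

    def __init__(self, k=5):
        self.k = k

    def fit(self, X, Y):
        # X: (n_samples, n_features); Y: (n_samples, n_labels) binary matrix.
        self.X_ = np.asarray(X, dtype=float)
        self.Y_ = np.asarray(Y, dtype=int)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Pairwise Euclidean distances between queries and training points.
        d = np.linalg.norm(X[:, None, :] - self.X_[None, :, :], axis=-1)
        nn = np.argsort(d, axis=1)[:, :self.k]   # indices of k nearest per query
        votes = self.Y_[nn].mean(axis=1)         # per-label frequency among neighbours
        return (votes >= 0.5).astype(int)        # assign a label if most neighbours carry it
```

The adaptation is in the output, not the distance computation: instead of returning one class, the vote is taken independently per label column, so a query can receive several labels at once.
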
“…Techniques for reducing MLC to SLC problems involve the choice of a base learner for solving the latter. Somewhat surprisingly, this choice is often neglected, despite having an important influence on generalization performance [10-12,15]. Even in more extensive studies [10,12], a base learner is fixed a priori in a more or less arbitrary way. Broader studies considering multiple base learners, such as [6,22], are relatively rare and rather limited in terms of the number of base learners considered.…”
Section: Introduction · Citation type: mentioning
Confidence: 99%
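
This excerpt's point, that the base learner in an MLC-to-SLC reduction is a first-class design choice, is easiest to see in binary relevance, the simplest such reduction. Below is a minimal sketch assuming scikit-learn-style estimators; the BinaryRelevance class is illustrative (scikit-learn's OneVsRestClassifier provides an equivalent reduction for multi-label targets).

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class BinaryRelevance:
    """Illustrative MLC-to-SLC reduction: one binary classifier per label."""

    def __init__(self, base_learner):
        self.base_learner = base_learner  # any scikit-learn-style classifier

    def fit(self, X, Y):
        # Y is an (n_samples, n_labels) binary matrix: one SLC problem per column.
        self.models_ = [
            clone(self.base_learner).fit(X, Y[:, j]) for j in range(Y.shape[1])
        ]
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.models_])

# The base learner is an explicit argument, so comparing base learners
# means changing one line rather than rebuilding the reduction:
mlc = BinaryRelevance(base_learner=LogisticRegression(max_iter=1000))
```

Making the base learner a parameter is exactly what the cited studies argue for: the reduction fixes the problem decomposition, while generalization performance still varies with the learner plugged into each subproblem.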