2011 IEEE International Workshop on Machine Learning for Signal Processing
DOI: 10.1109/mlsp.2011.6064599
Two Stage Classifier Chain Architecture for efficient pair-wise multi-label learning

Abstract: A common approach to solving multi-label learning problems with problem-transformation methods and dichotomizing classifiers is the pair-wise decomposition strategy. One problem with this approach is that making a prediction requires querying a quadratic number of binary classifiers, which can be quite time consuming, especially in learning problems with a large number of labels. To tackle this problem we propose a Two Stage Classifier Chain Architecture (TSCCA) for efficient pair-wise multi-label lea…
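To make the quadratic-query problem concrete, here is a minimal sketch of the standard pair-wise (one-vs-one) decomposition the abstract refers to. The classifier choice and the voting scheme are illustrative assumptions, not the paper's TSCCA method; the point is that a model with q labels stores and queries O(q^2) binary classifiers at prediction time.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_pairwise(X, Y):
    """Train one binary classifier per label pair.

    X: (n_samples, n_features); Y: (n_samples, n_labels) binary matrix.
    Returns a dict mapping (i, j) -> fitted classifier, i.e. up to
    q*(q-1)/2 classifiers for q labels.
    """
    models = {}
    q = Y.shape[1]
    for i, j in combinations(range(q), 2):
        # Standard pair-wise transformation: train only on examples where
        # exactly one of the two labels is present.
        mask = Y[:, i] != Y[:, j]
        if mask.sum() < 2 or len(set(Y[mask, i])) < 2:
            continue  # not enough examples to separate this pair
        models[(i, j)] = LogisticRegression().fit(X[mask], Y[mask, i])
    return models

def predict_votes(models, x, q):
    """Query every pair-wise classifier (the quadratic prediction cost
    the paper targets) and accumulate one vote per pair."""
    votes = np.zeros(q)
    for (i, j), clf in models.items():
        if clf.predict(x.reshape(1, -1))[0] == 1:
            votes[i] += 1
        else:
            votes[j] += 1
    return votes  # rank labels by votes; threshold to obtain the label set
```

The prediction loop above is exactly what becomes expensive for large q; reducing the number of classifiers that must be queried is the motivation for the two-stage architecture.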

Cited by 5 publications (3 citation statements). References 15 publications.
“…We compare the proposed ABC-based stacking algorithm with five state-of-the-art ensemble multilabel classification algorithms on all 10 benchmark datasets. The ten base level classifiers are CC [6], ECC [7], MLkNN [8], RAkEL [9], EPS [10], BRkNN [31], TSVA [32], TSCCA [33], IBLR [34], and CLR [35]. The five ensemble multilabel classification algorithms are RAkEL [9], ECC [6,7], EPS [10], RF-PCT [12], and ML-Forest [11].…”
Section: Methods
confidence: 99%
“…We evaluate our structured prediction learning framework in multi-label classification settings, as described in Section 5.2. We use the yeast (Elisseeff and Weston, 2001) dataset, which has 1500 training and 917 test instances with d = 103 attributes and |C| = 103 classes, and the scene (Gjorgjevikj and Madjarov, 2011) dataset, which has 1211 training and 1196 test instances with d = 294 attributes and |C| = 6 classes. For each input x, we construct the edge features R(x) (Equation (20)) by a dimensionality reduction of x.…”
Section: Multi-label Classification Datasets
confidence: 99%
“…Typically, they rely on extending the attribute vectors by information about the classes to which the given example has already been shown to belong [3], [10], [12], [13], [14], [15], [16]. In our opinion, though, most of these papers fail to pay adequate attention to two critical issues: error-propagation, and unnecessary relationships.…”
Section: Introduction
confidence: 99%
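The last citation statement describes the classifier-chain idea of extending attribute vectors with already-predicted labels. A minimal sketch of that scheme follows; the classifier choice and the fixed label order are assumptions for illustration, and the comment marks the error-propagation risk the citing authors raise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_chain(X, Y):
    """Fit a classifier chain: classifier j sees the original attributes
    plus the true values of labels 0..j-1 (standard chain training)."""
    chain = []
    X_ext = X.copy()
    for j in range(Y.shape[1]):
        chain.append(LogisticRegression().fit(X_ext, Y[:, j]))
        # Extend the attribute vector with this label's column.
        X_ext = np.hstack([X_ext, Y[:, [j]]])
    return chain

def predict_chain(chain, x):
    """Predict along the chain, feeding each prediction forward.

    Error-propagation risk: a wrong early prediction is appended to the
    attribute vector and influences every later classifier in the chain.
    """
    x_ext = x.reshape(1, -1)
    preds = []
    for clf in chain:
        y_j = clf.predict(x_ext)[0]
        preds.append(y_j)
        x_ext = np.hstack([x_ext, [[y_j]]])
    return np.array(preds)
```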