2006 Sixth International Conference on Hybrid Intelligent Systems (HIS'06)
DOI: 10.1109/his.2006.264925

Paired Comparisons Method for Solving Multi-Label Learning Problem

Abstract: The multi-label classification problem is a further generalization of the traditional multi-class learning problem. In the multi-label case the classes are not mutually exclusive, and any sample may belong to several classes at the same time. Such problems occur in many important applications (bioinformatics, text categorization, intrusion detection, etc.). In this paper we propose a new method for solving the multi-label learning problem, based on the paired comparisons approach. In this method each pair of possibly overlappi…

Cited by 23 publications (10 citation statements); references 5 publications.
“…First, we observe that LIBSVM+Platt significantly improves the performance of LIBSVM in all four settings. This is consistent with [25], where the conversion procedure makes the outputs from different SVM classifiers more comparable and consequently leads to better performance for multilabel ranking. On the other hand, both LIBSVM and LIBSVM+Platt are outperformed by the other two multi-label learning methods, indicating the importance of developing multi-label ranking methods for multi-label learning.…”
Section: Results (supporting)
confidence: 85%
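The comparability issue this excerpt touches on can be illustrated with a short sketch: raw SVM decision values live on different scales for different labels, while Platt-style sigmoid calibration maps each one to a probability, so the per-label scores can be sorted into a ranking. The sketch below uses scikit-learn's LinearSVC and CalibratedClassifierCV on synthetic data; it is an illustrative stand-in under those assumptions, not the LIBSVM+Platt setup evaluated in the cited experiments.

```python
# Illustrative sketch (not the cited experimental setup): one binary SVM per
# label, with Platt-style sigmoid calibration so the per-label scores become
# comparable probabilities that can be sorted into a label ranking.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

X, Y = make_multilabel_classification(n_samples=300, n_classes=5, random_state=0)

models = []
for j in range(Y.shape[1]):
    base = LinearSVC(max_iter=5000)  # raw decision values are not comparable across labels
    clf = CalibratedClassifierCV(base, method="sigmoid", cv=3)  # Platt scaling
    clf.fit(X, Y[:, j])
    models.append(clf)

# Calibrated probabilities share a common [0, 1] scale, so sorting them
# per instance yields a multi-label ranking.
scores = np.column_stack([m.predict_proba(X[:5])[:, 1] for m in models])
print(np.argsort(-scores, axis=1))
```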
“…This is the one-vs-one approach, as opposed to the one-vs-rest approach used by BM, therefore requiring |L|² classifiers as opposed to |L|. [7] accompanies each pairwise classifier with two probabilistic models to isolate the overlapping feature space. They cite a computational bottleneck for this method for large datasets.…”
Section: Related Work (mentioning)
confidence: 99%
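For readers unfamiliar with the one-vs-one decomposition mentioned in this excerpt, the following sketch trains one binary classifier per unordered label pair, using only the instances on which the two labels disagree, and aggregates the pairwise predictions into a per-instance label ranking. It is a generic pairwise scheme with placeholder data and a logistic-regression base learner; it does not reproduce the two probabilistic models per pair described in Petrovskiy (2006).

```python
# Generic one-vs-one (pairwise) multi-label sketch: one binary model per label
# pair, trained only where the two labels disagree, with soft votes aggregated
# into a label ranking. Placeholder data and learner; not Petrovskiy's (2006)
# probabilistic-model construction.
from itertools import combinations
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

X, Y = make_multilabel_classification(n_samples=300, n_classes=5, random_state=0)
L = Y.shape[1]

pairwise = {}
for a, b in combinations(range(L), 2):      # |L|(|L|-1)/2 classifiers in total
    mask = Y[:, a] != Y[:, b]               # keep instances where the pair disagrees
    ya = Y[mask, a]
    if mask.sum() < 2 or np.unique(ya).size < 2:
        continue                            # skip pairs without both outcomes
    clf = LogisticRegression(max_iter=1000).fit(X[mask], ya)
    pairwise[(a, b)] = clf                  # predicts "label a beats label b"

# Aggregate soft pairwise votes into per-label scores, then rank labels.
X_test = X[:5]
votes = np.zeros((X_test.shape[0], L))
for (a, b), clf in pairwise.items():
    p = clf.predict_proba(X_test)[:, 1]
    votes[:, a] += p
    votes[:, b] += 1.0 - p
print(np.argsort(-votes, axis=1))
```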
“…Although they can work with textual datasets, they scale poorly with the number of labels and can fail to perform well on sparse data (Freund and Schapire 1999). This family of methods has been recognised as having high computational complexity (Petrovskiy 2006).…”
Section: Alternative Methods (mentioning)
confidence: 99%
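To make the complexity claim in this excerpt concrete: a pairwise (one-vs-one) decomposition over a label set $L$ needs one binary classifier per unordered label pair,

\[
\binom{|L|}{2} = \frac{|L|\,(|L|-1)}{2} = O(|L|^2),
\]

so 100 labels already require 4950 pairwise models, compared with 100 for a one-vs-rest scheme. This quadratic growth in the number of labels is what drives the scalability concerns raised here and in the excerpts above.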
“…CLR) is a well-known example (many other pairwise methods are specific to the label ranking problem, which is beyond the scope of this paper). In Petrovskiy (2006), each pairwise classifier is accompanied by two probabilistic models to isolate the overlapping attribute space. This adds further complexity and causes a computational bottleneck on large datasets.…”
Section: Alternative Methods (mentioning)
confidence: 99%