2004
DOI: 10.1007/978-3-540-24775-3_5

Discriminative Methods for Multi-labeled Classification

Abstract: In this paper we present methods of enhancing existing discriminative classifiers for multi-labeled predictions. Discriminative methods like support vector machines perform very well for uni-labeled text classification tasks. Multi-labeled classification is a harder task subject to relatively less attention. In the multi-labeled setting, classes are often related to each other or part of an is-a hierarchy. We present a new technique for combining text features and features indicating relationships bet…

Cited by 566 publications (403 citation statements); references 8 publications.
“…A multi-label dataset D is therefore composed of n examples (x1, S1), (x2, S2), …, (xn, Sn). The multi-label problem is receiving increased attention and is relevant to many domains such as text classification [10,2] and genomics [19,16].…”
Section: Introduction
confidence: 99%
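The dataset definition quoted above pairs each feature vector with a set of labels. A minimal sketch of that representation in Python (the feature values and label names here are hypothetical, chosen only for illustration):

```python
# A toy multi-label dataset D = (x1, S1), ..., (xn, Sn): each example pairs
# a feature vector x_i with a label *set* S_i (not a single label).
dataset = [
    ([0.2, 0.7], {"sports"}),
    ([0.9, 0.1], {"politics", "economy"}),
    ([0.5, 0.5], {"sports", "economy"}),
]

# The label space is the union of all observed label sets.
label_space = set().union(*(labels for _, labels in dataset))
print(sorted(label_space))  # → ['economy', 'politics', 'sports']
```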
“…In this fashion, a single-label classifier can be employed to make single-label classifications, and these are then transformed back into multi-label representations. Prior problem transformation approaches have employed algorithms such as Support Vector Machines [2], Naive Bayes [5] and k Nearest Neighbor methods [19].…”
Section: Introduction
confidence: 99%
“…Each classifier is learned from the predictor variables and the data for a single class, and the results are combined to form the multi-label prediction. This method, called binary relevance [6], is easily implementable, has low computational complexity and is fully parallelizable. Hence it is scalable to a large number of classes.…”
confidence: 99%
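The binary-relevance scheme described in this citation statement can be sketched in a few lines of Python. This is a hypothetical toy illustration, not the cited paper's method: one independent binary sub-problem per class, with a simple nearest-centroid rule standing in for a real learner such as an SVM, and the per-class decisions combined into one multi-label prediction.

```python
# Toy data: feature vectors paired with label sets (hypothetical values).
dataset = [
    ([0.2, 0.7], {"sports"}),
    ([0.9, 0.1], {"politics", "economy"}),
    ([0.5, 0.5], {"sports", "economy"}),
]
labels = {"sports", "politics", "economy"}

def centroid(rows):
    """Mean feature vector of a list of examples."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(dataset, labels):
    # Binary relevance: one binary sub-problem per class. Positives are
    # examples whose label set contains the class; negatives are the rest.
    # Here each "classifier" is just the centroid of each side.
    models = {}
    for c in labels:
        pos = [x for x, s in dataset if c in s]
        neg = [x for x, s in dataset if c not in s]
        models[c] = (centroid(pos), centroid(neg))
    return models

def predict(models, x):
    # Combine the independent per-class decisions into a multi-label output:
    # assign every class whose positive centroid is closer than its negative.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return {c for c, (p, n) in models.items() if dist(x, p) < dist(x, n)}

models = train(dataset, labels)
print(sorted(predict(models, [0.85, 0.15])))  # → ['economy', 'politics']
```

Because the per-class classifiers share no state, the training loop is trivially parallelizable, which is exactly the scalability property the statement highlights.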