2012
DOI: 10.1007/s10994-011-5276-1

Compressed labeling on distilled labelsets for multi-label learning

Abstract: Directly applying single-label classification methods to the multi-label learning problems substantially limits both the performance and speed due to the imbalance, dependence and high dimensionality of the given label matrix. Existing methods either ignore these three problems or reduce one with the price of aggravating another. In this paper, we propose a {0, 1} label matrix compression and recovery method termed "compressed labeling (CL)" to simultaneously solve or at least reduce these three problems. CL f…
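To make the label-compression idea in the abstract concrete, here is a minimal, generic sketch of compressing a {0, 1} label matrix with a random projection and recovering sparse label vectors with orthogonal matching pursuit. This follows the general compressed-sensing style of label-space reduction, not the paper's CL/DL algorithm; every size, name, and threshold below is a hypothetical toy choice.

```python
import numpy as np
from sklearn.linear_model import Ridge, OrthogonalMatchingPursuit

# Generic label-space compression sketch (compressed-sensing style), not the
# paper's CL/DL algorithm; all sizes and names are hypothetical toy choices.
rng = np.random.default_rng(0)
n, d, k, m = 300, 25, 60, 12                    # instances, features, labels, compressed dim
X = rng.normal(size=(n, d))
Y = (rng.random((n, k)) < 0.05).astype(float)   # sparse {0, 1} label matrix

A = rng.normal(size=(m, k)) / np.sqrt(m)        # random projection of the label space
Z = Y @ A.T                                     # m-dimensional compressed targets

reg = Ridge(alpha=1.0).fit(X, Z)                # regress inputs onto the codewords

def predict_labels(x, sparsity=3):
    """Predict the codeword for x, then recover a sparse {0, 1} label vector."""
    z_hat = reg.predict(x.reshape(1, -1)).ravel()
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity).fit(A, z_hat)
    return (np.abs(omp.coef_) > 0.5).astype(int)

print(predict_labels(X[0]))
```

The point of the sketch is the workflow (compress the label matrix, learn in the compressed space, recover binary labels), which is the setting the abstract addresses; the paper's actual recovery via distilled labelsets differs from the naive decoder used here.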

Cited by 52 publications (30 citation statements)
References 59 publications
“…Recently, many algorithms have been proposed that aim to solve the high-dimensionality problem while capturing correlations among labels. In [24], a compressed labeling (CL) method has been proposed to address the imbalance, dependence and high dimensionality of the label space in multilabel learning. Zhou and Tao [25] propose a multilabel subspace ensemble (MSE) method to deal with the exponential-sized output space of multilabel learning.…”
Section: Related Work
confidence: 99%
“…Content may change prior to final publication. [Equation (24): an error term based on the KL divergence.] Thus, the calculation of Δw_kj is expressed in (25), (26), (27), (28), (29), (30) and (31).…”
Section: Co-evolutionary Learning of MLHN
confidence: 99%
“…The MUR is directly inherited from our previous work [15]. According to [15], [20], [21], MUR decreases the objective function (8), but it converges slowly and does not guarantee convergence to a local minimum. To overcome the slow convergence of MUR, Lin [9] first proposed applying the projected gradient descent (PGD) method to optimize NMF.…”
Section: Algorithm
confidence: 99%
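The statement above contrasts the multiplicative update rule (MUR) with projected gradient descent for NMF. For reference, below is a minimal sketch of the standard Lee-Seung multiplicative updates for Frobenius-norm NMF; it is not the citing paper's specific objective (8) nor Lin's PGD solver, and the data and matrix names are hypothetical.

```python
import numpy as np

def nmf_mur(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Standard Lee-Seung multiplicative updates for min ||V - W H||_F^2, W, H >= 0.

    The objective is monotonically non-increasing under these updates, but
    convergence is typically slow and a local minimum is not guaranteed -- the
    weakness that motivates projected-gradient NMF solvers.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# Toy usage on a random non-negative matrix (hypothetical data).
V = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = nmf_mur(V, rank=5)
print(np.linalg.norm(V - W @ H))
```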
“…There has been recent interest in multi-label methods that work in transformed label spaces [16][17][18][19][20][21], primarily based on low-dimensional projections of high dimensional label vectors. For example, random projections [16], maximum eigenvalue projections [18,17], and Gaussian random projections [21] provide techniques for mapping high dimensional label vectors to low dimensional codewords to improve the efficiency of multi-label learning.…”
Section: Introduction
confidence: 99%
“…Canonical correlation analysis (CCA) has also been considered for relating inputs to label projections [20].…”
Section: Introduction
confidence: 99%
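As a rough illustration of the projection-based methods surveyed in the statements above, the sketch below relates inputs to a low-dimensional label projection obtained with CCA and decodes by a naive nearest-codeword lookup. It is a generic, assumption-laden example (toy data, scikit-learn's CCA, arbitrary dimensions), not a re-implementation of any of the cited techniques.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

# Hypothetical sketch: project label vectors to m dimensions with CCA, regress
# inputs onto the projection, and decode by copying the label set of the nearest
# training codeword. All sizes and names are toy choices.
rng = np.random.default_rng(0)
n, d, k, m = 200, 15, 30, 5                     # instances, features, labels, projection dim
X = rng.normal(size=(n, d))
Y = (rng.random((n, k)) < 0.1).astype(float)    # sparse {0, 1} label matrix

cca = CCA(n_components=m).fit(X, Y)
_, Yc = cca.transform(X, Y)                     # m-dimensional label codewords

reg = Ridge(alpha=1.0).fit(X, Yc)               # map inputs to label codewords

def predict_labels(x):
    z = reg.predict(x.reshape(1, -1))
    nearest = np.argmin(np.linalg.norm(Yc - z, axis=1))   # nearest training codeword
    return Y[nearest].astype(int)                          # copy its label set

print(predict_labels(X[0]))
```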