2019
DOI: 10.1609/aaai.v33i01.33015016

Partial Multi-Label Learning by Low-Rank and Sparse Decomposition

Abstract: Multi-Label Learning (MLL) aims to learn from training data in which each example is represented by a single instance and associated with a set of candidate labels. Most existing MLL methods are designed to handle the problem of missing labels. However, in many real-world scenarios the labeling information for multi-label data is always redundant, which cannot be resolved by classical MLL methods; thus a novel Partial Multi-Label Learning (PML) framework is proposed to cope with this problem, i.e…


Cited by 78 publications (34 citation statements) · References 22 publications
“…For example, (Xie and Huang 2018) propose two effective methods PML-lc and PML-fp by introducing a confidence value for each candidate label. The decomposition scheme is utilized to tackle PML data in (Lijuan Sun and Jin 2019). PARTICLE (Fang and Zhang 2019) identifies the credible labels with high labeling confidences by employing an iterative label propagation procedure.…”
Section: Related Work
confidence: 99%
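The iterative label propagation idea mentioned in the statement above can be sketched generically: confidence scores for candidate labels are repeatedly propagated over an instance similarity graph and blended back with the initial candidate set. This is a minimal sketch of the general technique, not PARTICLE's exact algorithm; the similarity matrix `W` and the parameters `alpha` and `n_iter` are illustrative assumptions:

```python
import numpy as np

def propagate_label_confidence(W, Y, alpha=0.8, n_iter=50):
    """Estimate confidence scores F for candidate labels Y (n x q, binary)
    by iterative propagation over an instance similarity graph W (n x n).
    Generic sketch of label propagation, not a specific paper's algorithm."""
    # Row-normalize the similarity graph into a transition matrix
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    F = Y.astype(float)
    for _ in range(n_iter):
        # Blend neighbor-propagated scores with the initial candidate labels
        F = alpha * (P @ F) + (1 - alpha) * Y
        # Confidence scores only live on candidate labels
        F *= Y
        # Renormalize per instance so scores stay comparable across iterations
        F = F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)
    return F
```

After convergence, labels whose confidence exceeds a threshold can be treated as credible and used to train a conventional multi-label classifier, which is the spirit of the two-stage strategy these methods follow.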
“…In multi-label learning, a common assumption is that there exist label correlations among different labels (Zhu, Kwok, and Zhou 2017; Lijuan Sun and Jin 2019), and the feature mapping matrix U is thus linearly dependent. The low-rank assumption is therefore naturally used to capture this intrinsic property of the classifier.…”
Section: The PML-NI Framework
confidence: 99%
“…However, the confidence values would be error-prone, especially when noisy labels dominate, since this ignores the irrelevance of the non-candidate labels. A low-rank assumption is employed to disambiguate noisy labels via sparse matrix decomposition [4]. Another recent work [5] proposes to deal with the PML problem using a two-stage strategy.…”
Section: Related Work
confidence: 99%
“…A reasonable result is achieved, while the estimation of label confidence scores is error-prone, especially with a high proportion of false positive labels, since it ignores the irrelevance of the non-candidate labels. Some researchers employ a sparse decomposition scheme based on a low-rank assumption to identify the noisy labels for disambiguation [4]. Another recent PML work [5] tries to estimate the label confidence scores of candidate labels by employing iterative label propagation; a credible label elicitation technique is then employed to identify the credible labels, which can be used to induce a predictive model for making final predictions on unseen instances.…”
Section: Introduction
confidence: 99%
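The low-rank-plus-sparse decomposition strategy referenced above can be illustrated with a generic robust-PCA-style routine: the observed candidate-label matrix is split into a low-rank part (the underlying ground-truth label structure) and a sparse part (the noisy labels). This is a hedged sketch using standard alternating singular-value and entrywise soft-thresholding, not the paper's exact optimization; the parameters `lam`, `mu`, and `n_iter` are illustrative assumptions:

```python
import numpy as np

def lowrank_sparse_decompose(Y, lam=0.1, mu=1.0, n_iter=100):
    """Split a candidate-label matrix Y into a low-rank part L
    (ground-truth label structure) and a sparse part S (noisy labels).
    Generic robust-PCA-style alternating thresholding sketch."""
    L = np.zeros_like(Y, dtype=float)
    S = np.zeros_like(Y, dtype=float)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding of Y - S
        U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Sparse update: entrywise soft-thresholding of Y - L
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
    return L, S
```

In the PML setting, L then provides denoised label scores for training a predictive model, while the sparse residual S absorbs the redundant (false positive) candidate labels.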