2018
DOI: 10.1109/tkde.2017.2785795

Multi-Label Learning with Global and Local Label Correlation

Abstract: It is well known that exploiting label correlations is important to multi-label learning. Existing approaches either assume that label correlations are global and shared by all instances, or that they are local and shared only by a subset of the data. In real-world applications, however, both cases may occur: some label correlations are globally applicable, while others are shared only within a local group of instances. Moreover, it is also common that only partial labels are observed, whic…
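The abstract's distinction between global and local label correlations can be illustrated with a small sketch. This is not the paper's GLOCAL algorithm, and the label matrix and the hand-picked instance groups below are hypothetical; the point is only that a correlation matrix estimated on all instances can differ from one estimated within a local group.

```python
import numpy as np

# Hypothetical label matrix: 6 instances x 3 labels (illustrative only).
Y = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

def label_correlation(Y):
    """Pearson correlation between label columns; zeros out NaNs that
    arise when a label is constant within the group."""
    with np.errstate(invalid="ignore", divide="ignore"):
        C = np.corrcoef(Y, rowvar=False)
    return np.nan_to_num(C)

# Global correlation: estimated on all instances.
global_corr = label_correlation(Y)

# Local correlations: estimated separately per instance group
# (split by hand here; GLOCAL instead learns such groups from data).
groups = [Y[:3], Y[3:]]
local_corrs = [label_correlation(G) for G in groups]

print(np.round(global_corr, 2))
for C in local_corrs:
    print(np.round(C, 2))
```

Comparing the printed matrices shows, for example, that labels 1 and 2 co-occur strongly in the first group but not globally, which is exactly the situation the abstract describes.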

Cited by 319 publications (113 citation statements)
References 35 publications
“…Another line of related works is multi-label learning with missing labels. Many approaches aim to recover missing labels by exploiting low rank structure or label correlations [1], [12], [29], [35]. In their settings, the labels, which are missing in some instances, are within the observed class label set.…”
Section: Related Work
confidence: 99%
“…We compare our approach to the state-of-the-art emotion tagging methods: MET [20], TRBM [19], FRBM [23], and CEDM [18]. We also compare it to several multi-label classification methods, i.e., GLOCAL [27] and CLP-RNN [11]. Experimental results are shown in Table 2.…”
Section: Comparisons to Related Work
confidence: 99%
“…Inspired by [20], we separate the whole training data … ∈ ℝ^{l×n_m} be the corresponding label output matrix, where f_{m,j,:} ∈ ℝ^{1×n} (j = 1, 2, …)…”
Section: Incorporating Local Label Correlations
confidence: 99%
“…To validate the efficacy of the proposed framework, we conduct comparisons on nine commonly used benchmark data sets, which have been widely used in the MLFS literature [15], [19], [20], [25], [26]. To evaluate the performance of the comparing methods, we employ two types of evaluation metrics in MLL, i.e., example-based and label-based [1]. Given a test data set…”
Section: Experimental Study (A. Data Sets)
confidence: 99%
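The citation statement above distinguishes example-based and label-based evaluation metrics for multi-label learning. A minimal sketch with hypothetical predictions (the data below is made up; the metric definitions are the standard ones, e.g., Hamming loss as an example-based metric and macro-averaged F1 as a label-based metric):

```python
import numpy as np

# Hypothetical ground truth and predictions: 4 instances x 3 labels.
Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])

def hamming_loss(Y_true, Y_pred):
    """Example-based: average fraction of misclassified label entries."""
    return float(np.mean(Y_true != Y_pred))

def macro_f1(Y_true, Y_pred):
    """Label-based: F1 computed per label column, then averaged over labels."""
    f1s = []
    for j in range(Y_true.shape[1]):
        tp = np.sum((Y_true[:, j] == 1) & (Y_pred[:, j] == 1))
        fp = np.sum((Y_true[:, j] == 0) & (Y_pred[:, j] == 1))
        fn = np.sum((Y_true[:, j] == 1) & (Y_pred[:, j] == 0))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))

print(hamming_loss(Y_true, Y_pred))  # averages errors across all entries
print(macro_f1(Y_true, Y_pred))      # averages per-label F1 scores
```

The difference in aggregation direction is the point: example-based metrics average over instances (or entries), while label-based metrics average over label columns, so rare labels weigh more heavily in the latter.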