2022
DOI: 10.1109/tpami.2021.3086895
A Concise Yet Effective Model for Non-Aligned Incomplete Multi-View and Missing Multi-Label Learning

Abstract: In real-world applications, learning from data with multi-view and multi-label inevitably confronts three challenges: missing labels, incomplete views, and non-aligned views. Existing methods mainly concern the first two and commonly need multiple assumptions in attacking them, making even state-of-the-arts also involve at least two explicit hyper-parameters in their objectives such that model selection is quite difficult. More toughly, these will encounter a failure in dealing with the third challenge, le…


Cited by 41 publications (22 citation statements) · References 97 publications
“…Consequently, the total time complexity of MR is O(γ(N³ + TN²)), where γ is the iteration number. It should be noted that it is not difficult to speed up the algorithm by using ADMM [34], but this is beyond the scope of this work.…”
Section: E. Complexity Analysis
confidence: 99%
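The complexity claim above can be sanity-checked with a rough operation-count sketch. The mapping of the N³ and T·N² terms to concrete matrix operations is an assumption for illustration (e.g., one cubic-cost factorization plus T quadratic-cost products per iteration), not the cited algorithm itself:

```python
# Hypothetical cost model for the O(γ(N³ + T·N²)) complexity quoted above.
# Assumption: each of the gamma iterations performs one O(N^3) matrix
# operation plus T operations of O(N^2) each; this is an illustrative
# stand-in, not the authors' actual MR algorithm.

def iteration_flop_estimate(N: int, T: int, gamma: int) -> int:
    """Rough floating-point-operation count for gamma iterations."""
    per_iteration = N**3 + T * N**2
    return gamma * per_iteration

# The N^3 term dominates: doubling N multiplies the cost by nearly 8.
small = iteration_flop_estimate(N=100, T=10, gamma=50)
large = iteration_flop_estimate(N=200, T=10, gamma=50)
ratio = large / small  # close to, but slightly below, 8
```

This also makes the quoted remark about ADMM plausible: splitting the update into cheaper subproblems targets exactly the dominant cubic term.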
“…Multi-label classification results are obtained by a weighted combination of decisions from multiple sources. The classification fusion methods generally consider that, although the various views are not explicitly aligned, they can still be implicitly connected through public or shared labels [15]. Nevertheless, each view intuitively carries only a subset of the corresponding labels, meaning each view can capture only part of the common or shared label information.…”
Section: A. Multi-view Multi-label Learning
confidence: 99%
“…3) The non-aligned multi-view learning problem [15]. In most multi-view learning methods, it is often explicitly or implicitly assumed that the view samples are uniformly aligned, but in reality, it is often difficult to obtain fully consistent multi-view information.…”
Section: Introduction
confidence: 99%
“…(Zhang et al. 2018) leverages matrix factorization to learn a shared subspace representation, and simultaneously employs the Hilbert-Schmidt independence criterion to further preserve consensus on the shared representation. Besides the above methods, some recent methods have been proposed to learn from multi-view data with weak labels, such as (Tan et al. 2018; Wu et al. 2019; Li and Chen 2021).…”
Section: Multi-view Multi-label Learning (MVML)
confidence: 99%