2017
DOI: 10.1371/journal.pone.0176769

Robust auto-weighted multi-view subspace clustering with common subspace representation matrix

Abstract: In many computer vision and machine learning applications, data sets are distributed on certain low-dimensional subspaces. Subspace clustering is a powerful technique for finding the underlying subspaces and clustering data points correctly. However, traditional subspace clustering methods can only be applied to data from a single source, and how to extend these methods so that they can combine information from various data sources has become an active area of research. Previous multi-view subspace methods aim to …
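
For readers unfamiliar with the pipeline the abstract refers to, the sketch below shows one generic self-expressive multi-view subspace clustering recipe: compute a self-expressive coefficient matrix per view, fuse the matrices with inverse-loss (auto-weighted) weights, and run spectral clustering on the fused affinity. This is an illustrative Python sketch only; the ridge-regularized self-expression, the closed-form solve, and all function names are assumptions of ours and are not the specific objective or solver of this paper.

# Illustrative sketch, NOT the paper's method: ridge-regularized self-expression
# per view, inverse-loss (auto-weighted) fusion, then spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive(X, lam=0.1):
    # X: (d, n) view. Minimize ||X - X Z||_F^2 + lam ||Z||_F^2 in closed form,
    # then zero the diagonal so no point represents itself.
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(Z, 0.0)
    return Z

def auto_weighted_fusion(views, lam=0.1, eps=1e-8):
    # Weight each view's coefficient matrix by 1 / (2 * sqrt(reconstruction loss)),
    # i.e. the parameter-free auto-weighted rule, then combine.
    Zs, ws = [], []
    for X in views:
        Z = self_expressive(X, lam)
        loss = np.linalg.norm(X - X @ Z, "fro") ** 2
        Zs.append(Z)
        ws.append(1.0 / (2.0 * np.sqrt(loss) + eps))
    ws = np.asarray(ws) / np.sum(ws)
    return sum(w * Z for w, Z in zip(ws, Zs))

def multi_view_subspace_cluster(views, n_clusters, lam=0.1):
    Z = auto_weighted_fusion(views, lam)
    A = 0.5 * (np.abs(Z) + np.abs(Z.T))  # symmetric affinity from coefficients
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

# Toy usage: two random "views" of the same 60 points, 3 clusters requested.
rng = np.random.default_rng(0)
views = [rng.standard_normal((20, 60)), rng.standard_normal((30, 60))]
print(multi_view_subspace_cluster(views, n_clusters=3)[:10])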

Cited by 20 publications (6 citation statements). References 40 publications.
“…With the power mean incorporation strategy, the proposed JCD is robust against low-quality views, and we show that the auto-weighted strategy (Huang et al. 2019; Nie et al. 2016; Shu et al. 2017; Zhuge et al. 2017) is a special case of the power mean strategy. We prove that the solution can be obtained by solving another problem with introduced variables, and we develop an efficient algorithm for optimization that can be applied to large-scale multi-view semi-supervised classification.…”
Section: Introduction
confidence: 67%
“…The power mean strategy distinguishes the importance of the various views according to the view loss, which enables the views with smaller losses to play more important roles in classification. The auto-weighted strategy has been widely adopted by recent works (Huang et al. 2019; Nie et al. 2016; Shu et al. 2017; Zhuge et al. 2017) to incorporate the losses of different views, and it is essentially a special case of the power mean strategy with p = 1/2.…”
Section: Semi-supervised Learning With Discriminative Least Squares Regression
confidence: 99%
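
As an editorial side note on the p = 1/2 claim in the excerpt above: writing \ell_v(Z) for the loss of view v on the shared variable Z (notation introduced here purely for illustration, following the standard auto-weighted formulation of Nie et al. 2016), the reduction can be sketched as

\min_{Z}\ \sum_{v=1}^{m} \ell_v(Z)^{p}
\quad\Longleftrightarrow\quad
\min_{Z}\ \sum_{v=1}^{m} w_v\,\ell_v(Z)
\quad\text{with}\quad
w_v = p\,\ell_v(Z)^{\,p-1}\ \text{(held fixed at the current } Z\text{)},
\qquad
p=\tfrac{1}{2}\ \Longrightarrow\ w_v=\frac{1}{2\sqrt{\ell_v(Z)}}.

That is, the stationarity condition of the power-mean objective coincides with that of a weighted sum of view losses whose weights are inversely proportional to the square roots of the losses, which is exactly the parameter-free auto-weighted rule; hence the auto-weighted strategy is the p = 1/2 instance of the power mean strategy.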
“…However, all views are treated equally and the diversity among multi-view data is not explicitly considered in these methods [30–32]. Therefore, their performance may suffer from the graph of less informative views [34]. To overcome this limitation, Low-Rank Graph Optimization for Multi-View Dimensionality Reduction (LRGO-MVDR) is proposed in this section, which consists of the following two steps.…”
Section: Proposed Algorithm
confidence: 99%
“…In fact, different views often contribute unequally in practice. Therefore, one challenge is how to aggregate the strengths of various heterogeneous graphs by exploring the rich information among them, which certainly can lead to more accurate and robust performance than treating each individual type of graph equally [34].…”
Section: Introduction
confidence: 99%