2019
DOI: 10.1109/tit.2019.2915593

Provable Subspace Clustering: When LRR Meets SSC

Abstract: Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR) are both considered state-of-the-art methods for subspace clustering. The two methods are fundamentally similar in that both are convex optimizations exploiting the intuition of "self-expressiveness". The main difference is that SSC minimizes the vector ℓ1 norm of the representation matrix to induce sparsity, while LRR minimizes the nuclear norm (a.k.a. trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously…
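The combined program the abstract alludes to can be written, in the noiseless case, as min_C ‖C‖_* + λ‖C‖_1 subject to X = XC and diag(C) = 0. Below is a minimal CVXPY sketch of that program; the synthetic two-subspace data, the value of λ, and all names are illustrative assumptions, not the authors' code. With noisy data the equality constraint is typically replaced by a Frobenius data-fit term (see the ADMM sketch later in this section).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

def sample_subspace(ambient_dim, sub_dim, n_points):
    """Sample points lying exactly on a random sub_dim-dimensional subspace."""
    basis = np.linalg.qr(rng.standard_normal((ambient_dim, sub_dim)))[0]
    return basis @ rng.standard_normal((sub_dim, n_points))

# Noiseless union of two 2-D subspaces in R^10, 15 points each.
X = np.hstack([sample_subspace(10, 2, 15) for _ in range(2)])
n = X.shape[1]

lam = 0.5  # trade-off between low rank (nuclear norm) and sparsity (l1 norm)
C = cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(C) + lam * cp.sum(cp.abs(C)))
constraints = [X @ C == X, cp.diag(C) == 0]
cp.Problem(objective, constraints).solve()

C_hat = C.value  # representation matrix, encouraged to be low-rank and sparse
```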

Cited by 79 publications (85 citation statements: 1 supporting, 84 mentioning, 0 contrasting). References 41 publications (49 reference statements).

Selected citation statements, ordered by relevance:
“…This problem can be solved efficiently using the ADMM optimization procedure [11], [65]. Low-Rank Sparse Subspace Clustering (LRSSC) [29] requires that the representation matrix C be simultaneously low-rank and sparse. LRSSC solves the following problem:

$$\min_{C}\ \lambda\lVert C\rVert_{*} + \tau\lVert C\rVert_{1}\quad\text{s.t.}\quad X = XC,\ \operatorname{diag}(C) = 0,\tag{9}$$

where λ and τ are the rank and sparsity regularization constants, respectively.…”
Section: A. Related Work (citation type: mentioning; confidence: 99%)
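One way to solve such a composite program is the ADMM splitting the quote alludes to. Below is a minimal NumPy sketch, assuming a noise-tolerant variant of (9) that replaces the constraint X = XC with a Frobenius data-fit term, min_C ½‖X − XC‖²_F + λ‖C‖_* + τ‖C‖_1; the step size rho, the iteration count, and all names are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding: the prox of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):
    """Singular value thresholding: the prox of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def lrssc_admm(X, lam=0.1, tau=0.1, rho=1.0, n_iter=200):
    """ADMM for min_C 0.5*||X - XC||_F^2 + lam*||C||_* + tau*||C||_1.

    C is split into two auxiliary copies, J for the nuclear norm and
    S for the l1 norm, with consensus constraints C = J and C = S.
    """
    n = X.shape[1]
    XtX = X.T @ X
    inv = np.linalg.inv(XtX + 2.0 * rho * np.eye(n))  # reused every iteration
    J = np.zeros((n, n))
    S = np.zeros((n, n))
    Y1 = np.zeros((n, n))
    Y2 = np.zeros((n, n))
    for _ in range(n_iter):
        # C-update: unconstrained quadratic, closed-form solution.
        C = inv @ (XtX + rho * (J + S) - Y1 - Y2)
        # J-update: prox of (lam/rho) * nuclear norm at C + Y1/rho.
        J = svt(C + Y1 / rho, lam / rho)
        # S-update: prox of (tau/rho) * l1 norm at C + Y2/rho.
        S = soft_threshold(C + Y2 / rho, tau / rho)
        # Dual ascent on both consensus constraints.
        Y1 = Y1 + rho * (C - J)
        Y2 = Y2 + rho * (C - S)
    return C
```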
“…Neither a sparse nor a dense similarity matrix reveals a comprehensive correlation structure among samples, owing to their conflicting natures [49], [50], [51]. Consequently, to achieve a trade-off between sparsity and the grouping effect, numerous mixed norms, e.g., the trace Lasso [52] and the elastic net [33], have been integrated into the optimization objective.…”
Section: A. Calculating Self-representation (citation type: mentioning; confidence: 99%)
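To make that sparsity-versus-grouping trade-off concrete, here is a minimal per-column sketch using scikit-learn's ElasticNet as a stand-in solver (a convenience assumption, not the formulation of [33] or [52]); the alpha and l1_ratio values are likewise illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_representation(X, alpha=0.01, l1_ratio=0.7):
    """Self-expressive coding with an elastic net penalty.

    For each column x_i of X (d x n), regress x_i on the remaining
    columns; l1_ratio interpolates between lasso-like sparsity (1.0)
    and ridge-like grouping (0.0).
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=1)  # drop x_i so it cannot represent itself
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        model.fit(others, X[:, i])
        C[np.arange(n) != i, i] = model.coef_  # reinsert with a zero diagonal
    return C
```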
“…It relies on the assumption that high-dimensional data points lie in a union of multiple low-dimensional subspaces, and it aims to group the data points into the corresponding clusters simultaneously [12]. Owing to its promising performance and good interpretability, a number of clustering algorithms based on subspace clustering have been proposed [31,12,20,36,15,10]. For example, Sparse Subspace Clustering (SSC) [12] obtains the sparsest subspace coefficient matrix for clustering.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
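Whichever norm produces the coefficient matrix C, the final step these methods share is the same: symmetrize |C| into an affinity matrix and apply spectral clustering. A minimal sketch assuming scikit-learn, usable with the C returned by any of the sketches above:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(C, n_clusters):
    """Turn a self-expressive coefficient matrix into cluster labels.

    Builds the standard symmetric affinity W = |C| + |C|^T and feeds it
    to spectral clustering as a precomputed affinity.
    """
    W = np.abs(C) + np.abs(C).T
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(W)

# Example: labels = cluster_from_coefficients(C_hat, n_clusters=2)
```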