Learnable low-rank latent dictionary for subspace clustering (2021)
DOI: 10.1016/j.patcog.2021.108142

Cited by 22 publications (6 citation statements)
References 24 publications
“…It is obvious that the above equation (10) is the Lyapunov matrix equation, and adopting the Bartels-Stewart algorithm ensures that it has a closed-form solution [7].…”
Section: Optimization Algorithm
confidence: 99%
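The quoted step hinges on the Bartels-Stewart algorithm giving a closed-form solution to a Sylvester/Lyapunov-type equation A X + X B = Q. A minimal sketch of that step, using scipy.linalg.solve_sylvester (which implements Bartels-Stewart) on illustrative random matrices rather than the specific A, B, Q from the citing paper's update step:

```python
# A minimal sketch of solving a Sylvester/Lyapunov-type matrix equation
# A X + X B = Q in closed form. A, B, Q are illustrative placeholders,
# not the matrices from the citing paper.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

# solve_sylvester uses the Bartels-Stewart algorithm: Schur-decompose
# A and B, solve the resulting triangular system, and transform back.
X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))  # True: the closed-form solution holds
```

For the strict Lyapunov case (B = Aᵀ), scipy.linalg.solve_continuous_lyapunov(A, Q), which solves A X + X Aᴴ = Q, applies directly.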
“…The proposed robust subspace clustering approach FCSC is compared with subspace clustering methods including SSC [2], LSR [5], LRR [3], Correlation Adaptive Subspace Segmentation (CASS) [10], Latent Low-Rank Representation (LatLRR) [11], Block Diagonal Representation (BDR) [12], and Implicit Block Diagonal Low-Rank Representation (IBDLR) [13]. In our experiments, four benchmark datasets are utilized to evaluate our proposed FCSC: the COIL-20, Extended-YaleB, USPS, and Robust-NUST datasets [7], [14]. Furthermore, clustering ACCuracy (ACC) and Normalized Mutual Information (NMI) are employed, metrics frequently used in subspace clustering experiments [7].…”
Section: Preliminary
confidence: 99%
“…The self-expressive-based subspace clustering method has shown superior performance in machine learning and computer vision (Lu et al 2018; Li et al 2020a; Chen et al 2021c), but it has two non-negligible drawbacks. One is that it may fail to sufficiently discover the subspace structure when the original data is directly used to acquire the similarity matrix (Liu and Yan 2011; Xu et al 2021). The other is that the self-expressive strategy struggles to accurately describe the linear relationships between samples in real-world data, so the learned similarity matrix may be inaccurate.…”
Section: Problem Formulation
confidence: 99%
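For context on the drawbacks the quote describes, a minimal sketch of the self-expressive model X ≈ XC follows, using the ridge-regularized (LSR-style) closed form C = (XᵀX + λI)⁻¹XᵀX; the random data and λ are illustrative assumptions, not the cited paper's setup:

```python
# A minimal sketch of self-expressive subspace clustering: each sample
# is written as a linear combination of the other samples, X ≈ X C.
# lam and the random data are illustrative choices.
import numpy as np

def self_expressive_coefficients(X, lam=0.1):
    """X: (d, n) data matrix with samples as columns."""
    n = X.shape[1]
    G = X.T @ X                          # Gram matrix of the samples
    # Closed form of  min_C ||X - X C||_F^2 + lam * ||C||_F^2
    return np.linalg.solve(G + lam * np.eye(n), G)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))
C = self_expressive_coefficients(X)
W = 0.5 * (np.abs(C) + np.abs(C.T))      # symmetric similarity matrix
```

Spectral clustering on W then yields the segmentation; when X is noisy or the relationships between samples are not linear, C, and hence W, is exactly where the inaccuracy the quote warns about enters.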
“…Evaluation Measures: To evaluate the performance of different subspace clustering methods, we employ two popular metrics, each of which favors different properties of clustering: Accuracy (ACC) and Normalized Mutual Information (NMI); detailed definitions of both can be found in (Xu et al 2021). Both metrics lie in the range [0, 1], and a higher value indicates better clustering performance.…”
Section: Experimental Settings
confidence: 99%
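Since the quote defers the definitions of ACC and NMI to (Xu et al 2021), a minimal sketch of both metrics follows: ACC via Hungarian matching of predicted to ground-truth labels (scipy.optimize.linear_sum_assignment) and NMI from scikit-learn; the toy labels are illustrative:

```python
# A minimal sketch of the two clustering metrics: ACC, which permutes
# predicted cluster labels to best match the ground truth, and NMI.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=int)       # count[p, t]: co-occurrences
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1
    rows, cols = linear_sum_assignment(-count)  # maximize matched counts
    return count[rows, cols].sum() / len(y_true)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]                   # same partition, labels permuted
print(clustering_accuracy(y_true, y_pred))            # 1.0
print(normalized_mutual_info_score(y_true, y_pred))   # 1.0
```

Both scores lie in [0, 1] and are invariant to label permutation, which is why they suit clustering, where cluster indices carry no intrinsic meaning.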