2022
DOI: 10.1016/j.neucom.2022.05.011

Robust landmark graph-based clustering for high-dimensional data

Cited by 4 publications (1 citation statement)
References 31 publications
“…As for improving the robustness of real-world data clustering tasks, it is now widely adopted to use a robust norm to measure the error between the original data and the reconstructed representation, for example the L1-norm-based methods [27,28] and the L21-norm-based methods [29][30][31]. LSSC [27] uses the L1-norm to define a sparse coding problem that improves the robustness of the representation; RDCF [28] uses the L1-norm to minimize the error before and after the concept decomposition; and the L21-norm is used to enforce row sparsity for feature selection and to constrain the error between the subspace representation and the original data in LSS [29] and LRR [30], respectively.…”
Section: Introduction
confidence: 99%
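The two robust norms contrasted in the citation statement differ in how they aggregate the residual matrix E = X − X̂: the L1-norm sums absolute entries (element-wise sparsity), while the L2,1-norm sums the L2 norms of the rows (row sparsity, so whole samples can be down-weighted as outliers). A minimal sketch of the two error measures, with an illustrative residual matrix that is not taken from any of the cited papers:

```python
import numpy as np

def l1_norm(E):
    # L1-norm: sum of absolute values of all entries;
    # penalizes each residual entry independently.
    return np.abs(E).sum()

def l21_norm(E):
    # L2,1-norm: sum of the L2 norms of the rows;
    # a single corrupted row contributes only once,
    # which is why it promotes row-wise robustness.
    return np.linalg.norm(E, axis=1).sum()

# Hypothetical residual: the middle row is an outlier sample.
E = np.array([[0.1, 0.0, 0.1],
              [5.0, 5.0, 5.0],
              [0.0, 0.2, 0.0]])

print(l1_norm(E))   # element-wise aggregation: 15.4
print(l21_norm(E))  # row-wise aggregation: ~9.0
```

Note how the outlier row dominates the L1 value (15.0 of 15.4) but enters the L2,1 value only through its single row norm (√75 ≈ 8.66), which is what makes row-sparse penalties attractive for sample-level corruption.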