2021
DOI: 10.1155/2021/6639582

Low Rank Correlation Representation and Clustering

Abstract: Correlation learning is a technique utilized to find a common representation in cross-domain and multiview datasets. However, most existing methods are not robust enough to handle noisy data. As such, the common representation matrix learned could be influenced easily by noisy samples inherent in different instances of the data. In this paper, we propose a novel correlation learning method based on a low-rank representation, which learns a common representation between two instances of data in a latent subspac…
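The truncated abstract does not show the full objective, but low-rank representation methods of this kind typically convert the learned coefficient matrix into cluster labels by symmetrizing it into an affinity matrix and feeding it to spectral clustering. The sketch below illustrates only that generic post-processing step; the function name cluster_from_representation and its parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical post-processing sketch, assuming a coefficient matrix Z has
# already been learned by some low-rank representation method (the paper's
# common-representation model across two data instances is not shown here).
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_representation(Z, n_clusters):
    """Symmetrize a coefficient matrix Z into an affinity and cluster it."""
    W = (np.abs(Z) + np.abs(Z.T)) / 2.0  # common |Z| + |Z^T| symmetrization
    model = SpectralClustering(
        n_clusters=n_clusters,
        affinity="precomputed",       # W is already an affinity matrix
        assign_labels="discretize",
        random_state=0,
    )
    return model.fit_predict(W)
```

Clustering accuracy or NMI against ground-truth labels is the usual way such a pipeline is evaluated; whether the paper uses exactly this affinity construction cannot be confirmed from the truncated abstract.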

Cited by 6 publications (2 citation statements)
References 36 publications
“…However, these methods are not efficient for our approach because of the correlational nature of our data within clusters. Clustered data naturally exhibit dependence because they share common properties [29]; for example, people from the same region share culture and traditions and mostly use the same medical facilities.…”
Section: Methods (mentioning, confidence: 99%)
“…4.5. Convergence Study. Based on the established behavior of the inexact augmented Lagrange multiplier (IALM) optimization strategy reported in several pieces of literature [5, 49–51], the objective function values of the J, Z, and E subproblems were expected to decrease monotonically in each iteration until convergence. Moreover, the Z subproblem is known to have a closed-form solution…”
(mentioning, confidence: 99%)
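The J, Z, and E subproblems and the IALM strategy mentioned in this statement follow the standard low-rank representation (LRR) template. For reference only, the sketch below shows a generic LRR-style IALM loop in which every subproblem has a closed-form update (singular value thresholding for J, a linear system for Z, and column-wise shrinkage for E); the objective min ||J||_* + λ||E||_{2,1} s.t. X = XZ + E, Z = J and all parameter values are assumptions taken from that standard formulation, not the cited paper's exact model.

```python
# Generic LRR-style IALM sketch with J, Z, and E subproblems
# (standard formulation; the cited paper's objective may differ):
#   min ||J||_* + lam * ||E||_{2,1}   s.t.  X = X Z + E,  Z = J
import numpy as np

def svt(M, tau):
    """Singular value thresholding: closed-form solution of
    min_J tau*||J||_* + 0.5*||J - M||_F^2."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau*||.||_{2,1}."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def lrr_ialm(X, lam=0.1, rho=1.1, mu=1e-2, mu_max=1e6, tol=1e-6, max_iter=300):
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(max_iter):
        # J subproblem: closed form via singular value thresholding
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z subproblem: closed form via a linear system
        rhs = XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu
        Z = np.linalg.solve(np.eye(n) + XtX, rhs)
        # E subproblem: closed form via column-wise shrinkage
        E = l21_shrink(X - X @ Z + Y1 / mu, lam / mu)
        # dual updates and penalty growth
        R1 = X - X @ Z - E
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

A typical call is Z, E = lrr_ialm(X), after which Z can be passed to the spectral-clustering step sketched earlier; in this generic form it is the linear-system solve that gives the Z subproblem its closed form, though the exact shape of the cited paper's Z update cannot be verified from the snippet above.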