2014
DOI: 10.1007/s10489-014-0600-7

Low rank approximation with sparse integration of multiple manifolds for data representation

Abstract: Manifold regularized techniques have been extensively exploited in unsupervised learning tasks such as matrix factorization, whose performance is heavily affected by the underlying graph regularization. However, there exist no principled ways to select reasonable graphs under the matrix decomposition setting, particularly when multiple heterogeneous graph sources are available. In this paper, we deal with the issue of searching for the optimal linear combination space of multiple graphs under the low-rank matrix approximation model. Sp…
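
The abstract is truncated before the model is stated, so the following is only a minimal sketch of the kind of objective this family of methods optimizes: a low-rank factorization of the data regularized by a sparse, simplex-constrained combination of several graph Laplacians. The function names, update rules, and hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex.

    The projection zeroes out small coordinates, which is one simple way to
    obtain a *sparse* combination of graphs."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def multi_graph_lowrank(X, laplacians, rank, lam=1.0, lr=1e-3, iters=200, seed=0):
    """Illustrative alternating scheme for
        min_{U,V,mu} ||X - U V^T||_F^2 + lam * tr(V^T (sum_i mu_i L_i) V)
        s.t. mu >= 0, sum_i mu_i = 1.
    X: (features, samples); laplacians: list of (samples, samples) matrices."""
    rng = np.random.default_rng(seed)
    n_feat, n_samp = X.shape
    V = rng.standard_normal((n_samp, rank)) * 0.01
    mu = np.full(len(laplacians), 1.0 / len(laplacians))
    for _ in range(iters):
        Lc = sum(m * L for m, L in zip(mu, laplacians))  # combined Laplacian
        U = X @ V @ np.linalg.pinv(V.T @ V)              # closed-form U-step
        grad_V = -2.0 * (X.T @ U - V @ (U.T @ U)) + 2.0 * lam * (Lc @ V)
        V -= lr * grad_V                                 # gradient V-step
        g = lam * np.array([np.trace(V.T @ L @ V) for L in laplacians])
        mu = project_simplex(mu - lr * g)                # projected mu-step
    return U, V, mu
```

Constraining mu to the simplex and projecting after each gradient step tends to drive the weights of unhelpful graphs to exactly zero, which is one plausible reading of the "sparse integration of multiple manifolds" in the title.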

Cited by 18 publications (17 citation statements) · References 28 publications

Citation statements (ordered by relevance):
“…In fact such types of experiments have become a standard practice in the PCA community [5], [14], [4], [6], [15]. We perform our clustering experiments on 3 benchmark databases: CMU PIE, ORL and COIL20 using two opensource toolboxes: the UNLocBoX [16] for the optimization part and the GSPBox [17] for the graph creation.…”
Section: Results (mentioning)
confidence: 99%
“…Furthermore, additional structures from low-dimensional data can be utilized as prior knowledge to enhance the representability of the models [62,63,64]. Discrete graphs are also utilized to incorporate data manifold information into the dimensionality reduction framework [65,66,67,68,69,70,71,72,73].…”
Section: Geometric Methods for Generic Objects (mentioning)
confidence: 99%
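
As a concrete illustration of the discrete graphs mentioned in the snippet above: a k-nearest-neighbour graph over the samples yields a Laplacian L whose quadratic form tr(V^T L V) = 0.5 * sum_ij W_ij ||v_i - v_j||^2 penalizes embeddings that vary sharply between neighbouring points. This is a generic numpy construction, not code from any of the cited works:

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized k-NN graph.

    X: (features, samples). Returns a (samples, samples) matrix whose
    quadratic form measures how smoothly an embedding varies over the graph."""
    n = X.shape[1]
    # Pairwise squared Euclidean distances between columns (samples).
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(d2[i])[1:k + 1]  # skip self (distance 0)
        W[i, neighbours] = 1.0
    W = np.maximum(W, W.T)                       # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W
```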
“…Note that only RPCA and our proposed model leverage convexity and enjoy a unique global optimum with guaranteed convergence. [Section IV, Comparison with Related Works:] The main differences between our model (1) and the various state-of-the-art factorized PCA models [7], [19], [12], [8], [15] are, as summarized in Table I, the following. Non-factorized model: Instead of explicitly learning the principal directions U and principal components Q, it learns their product, i.e. the low-rank matrix L. Hence, (1) is a non-factorized PCA model.…”
Section: Related Work: Factorized PCA Models (mentioning)
confidence: 96%
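
The distinction drawn in this quote, learning the product L = U Q directly instead of the factors, is what makes such models convex: a nuclear-norm penalty on L stands in for the explicit rank-r factorization, and its proximal operator is singular value thresholding. Below is the standard SVT operator as a minimal sketch; it is generic background, not the cited paper's model (1):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    Solves min_L 0.5 * ||M - L||_F^2 + tau * ||L||_* in closed form by
    soft-thresholding the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

Because the nuclear norm is convex, objectives built on SVT steps (RPCA being the classic case) can offer the unique global optimum and convergence guarantees the quote refers to.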