2015
DOI: 10.1137/140971464

An Efficient Gauss–Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations

Abstract: We derive and study a Gauss–Newton method for computing a symmetric low-rank product XX^T, where X ∈ R^(n×k) for k < n, that is the closest to a given symmetric matrix A ∈ R^(n×n) in Frobenius norm. When A = B^T B (or BB^T), this problem essentially reduces to finding a truncated singular value decomposition of B. Our Gauss–Newton method, which has a particularly simple form, shares the same order of iteration complexity as a gradient method when k ≪ n, but can be significantly faster on a wide range of problems. …
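The closed-form update is not spelled out in the truncated abstract; the sketch below is our NumPy reconstruction, obtained by solving the linearized least-squares subproblem min_S ||XX^T + SX^T + XS^T − A||_F in closed form. It assumes A is symmetric and X has full column rank, resolves the subproblem's non-uniqueness by the minimal-norm choice, and the function name gauss_newton_step is ours; treat it as an illustration of the idea, not as the paper's exact formulation.

```python
import numpy as np

def gauss_newton_step(X, A):
    """One Gauss-Newton step for min_X ||X X^T - A||_F^2.

    Sketch only: linearizes R(X) = X X^T - A and minimizes
    ||R(X) + S X^T + X S^T||_F over S in closed form, assuming
    symmetric A (n x n) and full-column-rank X (n x k).
    """
    G = X.T @ X                       # k x k Gram matrix
    AX = A @ X                        # the only O(n^2 k) product
    Y = np.linalg.solve(G, AX.T).T    # A X (X^T X)^{-1}
    M = np.linalg.solve(G, X.T @ Y)   # (X^T X)^{-1} X^T A X (X^T X)^{-1}
    return Y - 0.5 * X @ (M - np.eye(X.shape[1]))
```

Beyond the single product A @ X, each step costs O(nk^2) extra work, which is consistent with the abstract's claim of gradient-like iteration complexity when k ≪ n.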

Cited by 39 publications (42 citation statements). References 32 publications.
“…In particular, the efficiency may not be significantly improved when working with thousands of CPU cores. From the perspective of optimization, a series of fast algorithms for solving (2.9) was proposed in [65,64,93,96], whose essential parts can be divided into two steps, updating a subspace to approximate the eigenvector space better and extracting eigenvectors by the Rayleigh-Ritz (RR) process. The main numerical algebraic technique for updating subspaces is usually based on the Krylov subspace, which constructs a series of orthogonal bases sequentially.…”
Section: Linear Eigenvalue Problem
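The Rayleigh–Ritz (RR) extraction mentioned in the quote is a standard procedure; here is a minimal sketch (function name ours), assuming a symmetric A and an orthonormal basis Q for the current subspace:

```python
import numpy as np

def rayleigh_ritz(A, Q, k):
    """Extract k approximate eigenpairs of symmetric A from the
    subspace spanned by the orthonormal columns of Q (n x m, m >= k)."""
    H = Q.T @ A @ Q                  # m x m projected matrix
    w, V = np.linalg.eigh(H)         # small dense eigenproblem
    idx = np.argsort(w)[::-1][:k]    # keep the k largest Ritz values
    return w[idx], Q @ V[:, idx]     # Ritz values and Ritz vectors
```

Subspace methods differ mainly in how they update Q between RR extractions, for instance by Krylov-style products with A as the quote describes.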
“…More importantly, the model allows one to design an algorithm that uses only matrix-matrix multiplication. A Gauss-Newton algorithm for calculating a low-rank decomposition is developed in [65]. When the matrix to be decomposed is of low rank, this algorithm can be more effective; its complexity is similar to that of the gradient method, but with Q-linear convergence.…”
Section: Linear Eigenvalue Problem
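To illustrate the point that only matrix-matrix multiplications are needed, here is a toy run (all names and parameters ours) reusing the gauss_newton_step sketch given after the abstract; convergence from a random start is plausible in practice but not guaranteed without the safeguards a full algorithm would add.

```python
import numpy as np
# assumes gauss_newton_step from the sketch after the abstract

rng = np.random.default_rng(0)
n, k = 500, 10
B = rng.standard_normal((n, k))
A = B @ B.T                            # exactly rank-k symmetric target

X = rng.standard_normal((n, k))        # random full-column-rank start
for it in range(20):
    X = gauss_newton_step(X, A)        # only matrix-matrix products inside
    print(it, np.linalg.norm(X @ X.T - A))
```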
“…O_iter(rmn), O_pre = unknown, O_conv(1/µT) (IALM) Lin et al. [91]: partial SVD; linear time SVD [198]; limited memory SVD (LMSVD) [98]; symmetric low-rank product Gauss–Newton [99]; alternating direction method (ADM)…”
“…First, new Singular Value Decomposition (SVD) solutions have been developed to make the iterations as efficient as possible and to deal with the fact that the standard SVD solution fails if the data are corrupted by anything other than small noise. For example, approximate SVD solutions exist to avoid full SVD computations and reduce computation time, such as partial SVD algorithms [170], linear time SVD algorithms [303], limited memory SVD algorithms [178], symmetric low-rank product Gauss–Newton algorithms [179], Block Lanczos with Warm Start (BLWS) algorithms [172], and randomized SVD algorithms [83], [335], [145]. Moreover, a lot of video data arrive sequentially over time, and the subspace in which the data lie can change with time.…”
Section: Introduction
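Of the alternatives listed in the quote, the randomized SVD is simple enough to sketch; below is a minimal version with oversampling and an optional power iteration (function name and default parameters are ours, not from any of the cited references).

```python
import numpy as np

def randomized_svd(B, k, oversample=10, power_iters=1, seed=0):
    """Approximate rank-k SVD of B (m x n) via a randomized range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((B.shape[1], k + oversample))
    Y = B @ Omega                        # sample the range of B
    for _ in range(power_iters):         # power iterations sharpen the
        Y = B @ (B.T @ Y)                # captured range for slow decay
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for range(Y)
    U_hat, s, Vt = np.linalg.svd(Q.T @ B, full_matrices=False)
    return (Q @ U_hat)[:, :k], s[:k], Vt[:k, :]
```

This connects back to the paper's setting: with A = B^T B, the top-k right singular vectors and values of B give the optimal symmetric low-rank product factor, X = V_k Σ_k up to an orthogonal transform.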