2015
DOI: 10.1515/auom-2015-0024

Block Power Method for SVD Decomposition

Abstract: We present in this paper a new method to determine the k largest singular values and their corresponding singular vectors for real rectangular matrices A ∈ R^{n×m}. Our approach is based on using a block version of the Power Method to compute a k-block SVD decomposition A_k = U_k Σ_k V_k^T, where Σ_k is a diagonal matrix whose diagonal entries are the k largest non-negative, monotonically decreasing singular values σ_1 ≥ σ_2 ≥ · · · ≥ σ_k. U_k and V_k are orthogonal matrices whose columns are the left and right singular vectors of the k largest singular value…
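The block power scheme the abstract outlines can be sketched in NumPy. This is a minimal illustration under our own assumptions (the function name, QR re-orthonormalization at each step, the stopping rule, and the final small SVD are illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

def block_power_svd(A, k, n_iter=1000, tol=1e-10, seed=0):
    """Approximate the k largest singular triplets of A by block power iteration.

    Returns U (n x k), s (k,), Vt (k x m) with A ~ U @ diag(s) @ Vt.
    The QR re-orthonormalization and stopping rule are illustrative choices.
    """
    n, m = A.shape
    rng = np.random.default_rng(seed)
    # Orthonormal random start block for the right singular subspace.
    V, _ = np.linalg.qr(rng.standard_normal((m, k)))
    s_old = np.zeros(k)
    for _ in range(n_iter):
        # One block power step: multiply by A, orthonormalize,
        # multiply by A^T, orthonormalize again.
        U, _ = np.linalg.qr(A @ V)
        V, R = np.linalg.qr(A.T @ U)
        s = np.abs(np.diag(R))                    # singular value estimates
        if np.linalg.norm(s - s_old) <= tol * max(s[0], 1.0):
            break
        s_old = s
    # Small k x k SVD to sort the estimates and align the bases.
    B = U.T @ A @ V                               # nearly diagonal at convergence
    Ub, s, Vbt = np.linalg.svd(B)
    return U @ Ub, s, Vbt @ V.T
```

The closing k×k SVD costs only O(k³) and fixes the ordering and signs of the estimates, so the dominant cost per iteration stays at two tall-skinny matrix products plus two reduced QR factorizations.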

Cited by 23 publications (19 citation statements) · References 8 publications
“…Figure 1: Geometric interpretation of one step of the projector-splitting optimization procedure: the gradient step and the retraction of the high-rank matrix X_i + ∇F(X_i) to the manifold of low-rank matrices M_d. It is also quite intuitive: instead of computing the full SVD of X_i + ∇F(X_i), as the gradient projection method would, we use just one step of the block power numerical method (Bentbib and Kanber, 2015) to compute the SVD, which reduces the computational complexity.…”
Section: Discussionmentioning
confidence: 99%
“…Then, the contribution of each derived RF precoding vector was iteratively cancelled to form the residual optimal covariance matrix G_n (Line 9 of Algorithm 3). Because only the first singular vector was required, we used the power SVD method [13] (Algorithm 4) to avoid computing the complete SVD. The accuracy of u^(N_p) increases with N_p.…”
Section: B Kkt-condition-based Algorithmmentioning
confidence: 99%
“…13: end for. Output: F_RF. Algorithm 4: Power Method for SVD [13]. Input: a square matrix G ∈ C^{n×n}, iteration count N_p. 1:…”
Section: B Kkt-condition-based Algorithmmentioning
confidence: 99%
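The kind of routine the excerpt's Algorithm 4 outlines, a power iteration that recovers only the leading singular pair of a square complex matrix, might look like the following sketch (the function name and the Gram-matrix formulation are our assumptions, not the cited algorithm):

```python
import numpy as np

def dominant_singular_pair(G, n_p=300, seed=0):
    """Power iteration for the leading left singular vector of a square matrix G.

    Iterating u <- G G^H u / ||G G^H u|| drives u toward the left singular
    vector of the largest singular value; the estimate sharpens as n_p grows.
    """
    n = G.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    u /= np.linalg.norm(u)
    for _ in range(n_p):
        u = G @ (G.conj().T @ u)   # apply the Gram operator G G^H
        u /= np.linalg.norm(u)
    sigma = np.linalg.norm(G.conj().T @ u)   # leading singular value estimate
    return u, sigma
```

Each iteration costs two matrix-vector products, O(n²), versus O(n³) for a full SVD, which matches the excerpt's motivation for avoiding complete SVD processing when only the first singular vector is needed.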
“…Following the sole objective of image compression using SVD, the main problem is which rank K to use to obtain better image compression. For this reason, the method presented in El Asnaoui et al. [14] introduces two new approaches: the first is an improvement of the Block Truncation Coding method that overcomes the disadvantages of classical Block Truncation Coding, while the second describes how to obtain a new rank for the SVD method, which gives better image compression.…”
Section: Lossy Compression Is Another Type Of Image Compressionmentioning
confidence: 99%
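The rank-selection trade-off the excerpt discusses can be made concrete with a generic truncated-SVD compressor (a sketch of the standard technique, not the method of [14]; the function name is ours):

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation of a grayscale image via truncated SVD.

    Returns the approximation and its relative Frobenius error, which by
    the Eckart-Young theorem equals the root-sum-square of the discarded
    singular values divided by the Frobenius norm of the image.
    """
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k reconstruction
    rel_err = np.sqrt(np.sum(s[k:] ** 2)) / np.linalg.norm(s)
    return approx, rel_err
```

Storing the rank-K factors takes K(n + m + 1) numbers instead of n·m, so choosing K trades storage against the reconstruction error above, which is exactly the problem the excerpt raises.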