2018
DOI: 10.1137/17m1140480

Computing Low-Rank Approximations of Large-Scale Matrices with the Tensor Network Randomized SVD

Abstract: We propose a new algorithm for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the Matrix Product Operator (MPO) format, also called the Tensor Train Matrix format. Our tensor network randomized SVD (TNrSVD) algorithm is an MPO implementation of the randomized SVD algorithm that is able to compute dominant singular values and their corresponding singular vectors. In contrast to the state-of-the-art tensor-based alternating least squares SVD (ALS-SVD) and modified alternating least squares SVD (MALS-SVD) matrix approximation methods…
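The building block behind TNrSVD is the classical randomized SVD of Halko, Martinsson, and Tropp. The sketch below shows that dense primitive in plain NumPy (the function name, oversampling default, and power-iteration count are illustrative, not from the paper); the paper's contribution is to carry out these same steps with every matrix held in MPO form, which this dense sketch does not attempt.

```python
import numpy as np

def randomized_svd(A, r, p=5, q=1, seed=None):
    """Rank-r randomized SVD with oversampling p and q power iterations.

    Plain dense sketch of the classical randomized SVD; TNrSVD performs
    the same steps with all matrices kept in MPO form.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + p))
    Y = A @ Omega                          # sample the range of A
    Q, _ = np.linalg.qr(Y)                 # orthonormal range basis
    for _ in range(q):                     # power iterations sharpen the basis
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                            # small (r+p) x n projected matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :r], s[:r], Vt[:r]     # dominant singular triplets

# quick check on an exactly rank-10 matrix: error is at machine precision
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 300))
U, s, Vt = randomized_svd(A, r=10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```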

Cited by 30 publications (26 citation statements)
References 30 publications
“…To illustrate this, we report the runtime of the HOSVD, R-HOSVD, STHOSVD, and R-STHOSVD algorithms on X as the size of each dimension increases. For inputs, we fixed the target rank to be (5, 5, 5, 5, 5), the oversampling parameter as p = 5, and we used processing order ρ = [1, 2, 3, 4, 5] in the sequential algorithm. Figure 1, Left: Relative approximation error for the 5-mode Hilbert tensor X ∈ R^{25×25×25×25×25} defined in (6.1), with target rank (r, r, r, r, r) and oversampling parameter p = 5.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
“…Right: Actual relative error for X from the R-HOSVD and R-STHOSVD algorithms compared to the calculated error bound as the target rank (r, r, r, r, r) increases. Both algorithms use oversampling parameter p = 5, and R-STHOSVD uses the processing order ρ = [1, 2, 3, 4, 5].…”
Section: Numerical Experiments (mentioning)
confidence: 99%
“…In addition, the TT-cross algorithm is usually slow, as it often needs to be restarted when the desired accuracy is not met. A recent alternative for the conversion of a sparse matrix to a tensor network is described in [3]. This newly proposed algorithm converts a given sparse matrix directly into a tensor network without any dyadic decomposition.…”
Section: Generic Matrix C(t) (mentioning)
confidence: 99%
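For contrast with the direct conversion of [3], the classical route from a matrix to an MPO is a sequence of truncated SVDs on an interleaved reshaping of the matrix (TT-SVD in the MPO index ordering). The sketch below shows that dense baseline, which is exactly what becomes expensive for large sparse matrices; it is not the algorithm of [3], and the truncation tolerance is illustrative.

```python
import numpy as np

def matrix_to_mpo(M, row_dims, col_dims, tol=1e-12):
    """Classical TT-SVD conversion of a (dense) matrix into MPO cores.

    Shown only as the baseline that TT-cross and the direct sparse
    algorithm of [3] aim to beat; it requires the full dense matrix.
    Core k has shape (r_{k-1}, row_dims[k], col_dims[k], r_k).
    """
    d = len(row_dims)
    # (m1...md) x (n1...nd)  ->  (m1, n1, m2, n2, ..., md, nd)
    T = M.reshape(list(row_dims) + list(col_dims))
    T = T.transpose([a for pair in zip(range(d), range(d, 2 * d)) for a in pair])
    cores, r = [], 1
    for k in range(d - 1):
        T = T.reshape(r * row_dims[k] * col_dims[k], -1)
        U, s, Vt = np.linalg.svd(T, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))      # drop small singular values
        cores.append(U[:, :keep].reshape(r, row_dims[k], col_dims[k], keep))
        T, r = s[:keep, None] * Vt[:keep], keep
    cores.append(T.reshape(r, row_dims[-1], col_dims[-1], 1))
    return cores

# round-trip check on an 8 x 8 matrix factored into 2x2x2 modes
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
cores = matrix_to_mpo(M, (2, 2, 2), (2, 2, 2))
full = cores[0]
for C in cores[1:]:
    full = np.tensordot(full, C, axes=(-1, 0))          # contract bond indices
full = full.squeeze(axis=(0, -1)).transpose(0, 2, 4, 1, 3, 5).reshape(8, 8)
print(np.allclose(full, M))                             # True
```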
“…The TT-cross approximation algorithm, on the other hand, is too slow to deploy for real-time applications. Finally, the alternative matrix-to-tensor-network conversion algorithm reported in [3] is best suited for sparse matrices, while C(t) will contain many nonzero entries. Fortunately, it is possible to derive an efficient algorithm that exploits the repeated Kronecker product structure to construct an exact tensor network representation of C(t).…”
Section: MIMO Volterra Output Model Matrix C(t) (mentioning)
confidence: 99%
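The Kronecker observation in this excerpt rests on a simple identity: a Kronecker product of matrices is an exact MPO with every bond dimension equal to 1, each factor serving as one core. A minimal two-factor check, with illustrative matrices:

```python
import numpy as np

# For C = A (x) B, C[(i1,i2),(j1,j2)] = A[i1,j1] * B[i2,j2], so A and B
# are themselves the MPO cores and all bond dimensions are 1.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((2, 5))
C = np.einsum('ij,kl->ikjl', A, B).reshape(3 * 2, 4 * 5)  # contract the trivial bond
print(np.allclose(C, np.kron(A, B)))                      # True
```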