2021 | Preprint
DOI: 10.48550/arxiv.2104.01101

Fast and Accurate Randomized Algorithms for Low-rank Tensor Decompositions

Linjian Ma,
Edgar Solomonik

Abstract: Low-rank Tucker and CP tensor decompositions are powerful tools in data analytics. The widely used alternating least squares (ALS) method, which solves a sequence of over-determined least squares subproblems, is inefficient for large and sparse tensors. We propose a fast and accurate sketched ALS algorithm for Tucker decomposition, which solves a sequence of sketched rank-constrained linear least squares subproblems. Theoretical sketch size upper bounds are provided to achieve O(ε)-relative error for each subproblem…
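The core primitive described in the abstract can be illustrated concretely. Below is a minimal NumPy sketch of one sketched rank-constrained least squares subproblem (the mode-1 update of a 3-way Tucker-ALS, where X_(1) denotes the mode-1 unfolding). All sizes are hypothetical, and a plain Gaussian sketch stands in for the TensorSketch and leverage score sampling operators the paper actually analyzes; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small sizes; the method targets tensors far too large for this.
I1, I2, I3 = 60, 50, 40      # tensor dimensions
R1, R2, R3 = 5, 5, 5         # Tucker ranks
m = 200                      # sketch size (number of rows kept)

X = rng.standard_normal((I1, I2, I3))                 # data tensor
A2 = np.linalg.qr(rng.standard_normal((I2, R2)))[0]   # current factor, mode 2
A3 = np.linalg.qr(rng.standard_normal((I3, R3)))[0]   # current factor, mode 3

# Mode-1 subproblem: min_Z ||P Z - X_(1)^T||_F  s.t. rank(Z) <= R1,
# where P = A2 (x) A3 and the solution Z equals G_(1)^T A1^T.
P = np.kron(A2, A3)                                   # (I2*I3) x (R2*R3)
Xt = X.reshape(I1, -1).T                              # (I2*I3) x I1

# Sketch both sides with a Gaussian map (the paper uses TensorSketch /
# leverage score sampling instead), then solve the small sketched problem.
S = rng.standard_normal((m, I2 * I3)) / np.sqrt(m)
Q, R = np.linalg.qr(S @ P)
C = Q.T @ (S @ Xt)                                    # (R2*R3) x I1

# Rank constraint: truncate the SVD of C to rank R1, then back-substitute.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
Z = np.linalg.solve(R, (U[:, :R1] * s[:R1]) @ Vt[:R1])

# Recover the updated orthonormal factor A1 and the mode-1 unfolded core.
Uz, sz, Vzt = np.linalg.svd(Z.T, full_matrices=False)
A1_new, G1 = Uz[:, :R1], sz[:R1, None] * Vzt[:R1]
```

The key point is that only the small m-row sketched system is ever factored; the (I2*I3)-row design matrix is touched once when applying the sketch.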

Cited by 3 publications (4 citation statements) | References 45 publications
“…Nowadays, the HOOI algorithm is sometimes referred to as Tucker-ALS; examples can be found in Refs. [21,22,25]. Here we would like to point out that, though quite alike, the HOOI algorithm is essentially different from the classical ALS method in the way that the sub-problem is defined and solved.…”
Section: Discussion of HOOI Algorithm
mentioning
confidence: 99%
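To make the distinction this quote draws concrete, here is a textbook HOOI sweep in NumPy: each sub-problem fixes all other factors, projects the tensor onto them, and extracts a leading singular subspace via an SVD, rather than solving an unconstrained least squares problem as classical ALS does. This is a generic illustration, not code from any of the cited papers.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, contracting M's columns with T's n-th mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0)), 0, mode)

def hooi(X, ranks, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    N = X.ndim
    factors = [np.linalg.qr(rng.standard_normal((X.shape[n], ranks[n])))[0]
               for n in range(N)]
    for _ in range(iters):
        for n in range(N):
            # HOOI sub-problem: project X onto all other factors, then take the
            # leading left singular subspace -- not a plain least squares solve.
            Y = X
            for m in range(N):
                if m != n:
                    Y = mode_dot(Y, factors[m].T, m)
            U = np.linalg.svd(unfold(Y, n), full_matrices=False)[0]
            factors[n] = U[:, :ranks[n]]
    core = X
    for n in range(N):
        core = mode_dot(core, factors[n].T, n)
    return core, factors

core, factors = hooi(np.random.default_rng(1).standard_normal((20, 20, 20)), (3, 3, 3))
```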
“…With the rapid growth in data volume, efficient stochastic tensor methods become increasingly important for higher-order data structures to boost scalability. These methods are largely based on sampling (Ma and Solomonik, 2021; Yang et al., 2021; Kolda and Hong, 2020), which accelerates the computation of over-determined least squares problems (Battaglino et al., 2018; Larsen and Kolda, 2020) in ALS for dense (Ailon and Chazelle, 2006) and sparse (Eshragh et al., 2019) tensors by effective strategies, such as the Fast Johnson-Lindenstrauss Transform (Ailon and Chazelle, 2006), leverage-based sampling (Eshragh et al., 2019), and sketching (Zhou et al., 2014). However, these algorithms only focus on making ALS steps less costly and require loading the full data into memory.…”
Section: Related Work
mentioning
confidence: 99%
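The acceleration mechanism these works share can be shown in a few lines: an over-determined least squares problem is shrunk by sampling rows with probability proportional to their leverage scores, then solved on the sample. The sizes below are hypothetical, and the exact leverage scores are computed by a thin QR purely for illustration; practical methods estimate them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical over-determined problem: n >> d, as in ALS subproblems.
n, d, m = 20000, 50, 600
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Exact row leverage scores via a thin QR (practical methods approximate them).
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)              # leverage scores; they sum to d
p = lev / lev.sum()

# Sample m rows proportionally to leverage, rescale, and solve the small problem.
idx = rng.choice(n, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])
x_sk = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)[0]

# Compare against the full least squares solution.
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sk - x_full) / np.linalg.norm(x_full))
```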
“…• Parallel Streaming TT Sketching (PSTT): Since the SVD in Parallel-TTSVD can be computationally expensive, we can use randomized linear algebra to find orthonormal (ON) bases that approximate the column space of tensor unfoldings. This algorithm is inspired by matrix sketching [21], Tucker sketching [39], randomized algorithms for CP and Tucker format [29], and TT sketching in a sequential manner [9]. Sketching algorithms are ideal for streaming data, where it is infeasible to store the tensor in cache.…”
mentioning
confidence: 99%
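The ingredient this quote borrows, a randomized range finder for the column space of an unfolding, can be sketched as follows. Sizes are hypothetical, and a plain Gaussian test matrix follows the generic Halko-Martinsson-Tropp recipe rather than any specific cited algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_range_finder(A, r, oversample=10):
    """Orthonormal basis approximating the column space of A (single-pass sketch)."""
    G = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ G)
    return Q

# Hypothetical exactly low-rank 3-way tensor built in Tucker form.
I, r = 40, 4
core = rng.standard_normal((r, r, r))
U = [np.linalg.qr(rng.standard_normal((I, r)))[0] for _ in range(3)]
X = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

A1 = X.reshape(I, -1)                   # mode-1 unfolding
Q = randomized_range_finder(A1, r)
err = np.linalg.norm(A1 - Q @ (Q.T @ A1)) / np.linalg.norm(A1)
print(err)                              # near machine precision here: rank(A1) <= r
```

Because the tensor is touched only through the product A1 @ G, the basis can be accumulated while the data streams in, which is the property PSTT exploits.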
“…A common idea in this scenario for large matrices and tensors is sketching, where information about the matrix or tensor is obtained via matrix-vector multiplications. This idea is used for computing low-rank approximations of matrices [21], Tucker decomposition on tensors [29,38], and TT decomposition [9]. In particular, SVDs in TTSVD can be replaced by sketching and a randomized range finder [9].…”
mentioning
confidence: 99%
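As a rough illustration of the idea in this quote, here is a TTSVD-style sweep in which each SVD is replaced by a randomized range finder. The function name, sizes, and the crude column truncation are all illustrative choices under stated assumptions, not the cited papers' algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)

def tt_sketch(X, ranks, oversample=5):
    """TTSVD-style sweep with each SVD replaced by a randomized range finder."""
    dims, N = X.shape, X.ndim
    cores, r_prev = [], 1
    C = X.reshape(r_prev * dims[0], -1)
    for n in range(N - 1):
        r = ranks[n]
        G = rng.standard_normal((C.shape[1], r + oversample))
        Q = np.linalg.qr(C @ G)[0][:, :r]   # crude truncation; fine if rank(C) <= r
        cores.append(Q.reshape(r_prev, dims[n], r))
        C = (Q.T @ C).reshape(r * dims[n + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# Quick check on a tensor with exact TT-ranks (2, 2): the error should be tiny.
G1 = rng.standard_normal((1, 10, 2))
G2 = rng.standard_normal((2, 10, 2))
G3 = rng.standard_normal((2, 10, 1))
X = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)
cores = tt_sketch(X, ranks=(2, 2))
Y = np.einsum('aib,bjc,ckd->ijk', *cores)
print(np.linalg.norm(Y - X) / np.linalg.norm(X))
```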