2020
DOI: 10.1137/19m1299335
Tensor Regression Using Low-Rank and Sparse Tucker Decompositions

Cited by 17 publications (10 citation statements)
References 30 publications

“…[CRY19] proposed projected gradient descent algorithms with respect to the tensors, which have larger computation and memory footprints than the factored gradient descent approaches taken in this paper. [ARB20] proposed a tensor regression model where the tensor is simultaneously low-rank and sparse in the Tucker decomposition. A concurrent work [LZ21] proposed a Riemannian Gauss-Newton algorithm, and obtained an impressive quadratic convergence rate for tensor regression (see Table 2).…”
Section: Additional Related Work
Citation type: mentioning (confidence: 99%)
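
To make the simultaneously low-rank and sparse Tucker regression model referenced in the statement above concrete, here is a minimal NumPy sketch of generating such a coefficient tensor and its linear measurements. It is illustrative only (not code from [ARB20] or the cited paper); the dimensions, ranks, sparsity level, sample size, and noise scale are arbitrary assumptions.

```python
# Minimal sketch (assumed toy sizes, not the authors' implementation):
# a coefficient tensor B of low Tucker rank with entrywise-sparse factor
# matrices, observed through linear measurements y_i = <X_i, B> + noise.
import numpy as np

rng = np.random.default_rng(0)
dims, ranks, keep_frac = (10, 10, 10), (2, 2, 2), 0.3   # assumed values

core = rng.standard_normal(ranks)                 # small Tucker core
factors = []
for d, r in zip(dims, ranks):
    U = rng.standard_normal((d, r))
    U[rng.random((d, r)) > keep_frac] = 0.0       # keep ~30% of entries nonzero
    factors.append(U)

# B = core x_1 U1 x_2 U2 x_3 U3, written as one einsum over the mode products
B = np.einsum('abc,ia,jb,kc->ijk', core, *factors)

# Linear Gaussian measurements of B
n = 200
X = rng.standard_normal((n,) + dims)
y = np.einsum('nijk,ijk->n', X, B) + 0.01 * rng.standard_normal(n)
print(y.shape, np.count_nonzero(B) / B.size)      # (200,), nonzero fraction of B
```
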
“…Here, we use a mode sparsity constraint, which induces sparsity in each element of the CP decomposition for every activation tensor independently. In tensor regression, this regularization is also used, for instance, in [36].…”
Section: Tensor Convolutional Dictionary Learning With CP Low-Rank Ac...
Citation type: mentioning (confidence: 99%)
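
As a rough illustration of an elementwise sparsity penalty on CP factors (a hedged sketch; the exact mode-sparsity constraint used in the cited work may differ), one common formulation is:

```latex
% Illustrative generic objective: CP fit of a tensor \mathcal{T} with an
% entrywise l1 penalty on each factor matrix A^{(n)} (weight \lambda assumed).
\min_{A^{(1)},\dots,A^{(N)}}\;
  \tfrac{1}{2}\,\bigl\|\mathcal{T} - [\![A^{(1)},\dots,A^{(N)}]\!]\bigr\|_F^2
  \;+\; \lambda \sum_{n=1}^{N} \bigl\|A^{(n)}\bigr\|_1
```

Here $[\![\cdot]\!]$ denotes the CP (Kruskal) reconstruction and $\|\cdot\|_1$ the entrywise $\ell_1$ norm.
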
“…First, low-rank tensor estimation has attracted much recent attention from the machine learning and statistics communities. Various methods were proposed, including convex relaxation (Mu et al., 2014; Raskutti et al., 2019; Tomioka et al., 2011), projected gradient descent (Rauhut et al., 2017; Chen et al., 2019a; Ahmed et al., 2020; Yu and Liu, 2016), gradient descent on the factorized model (Han et al., 2020b; Cai et al., 2019; Hao et al., 2020), alternating minimization (Zhou et al., 2013; Jain and Oh, 2014; Liu and Moitra, 2020; Xia et al., 2020), and importance sketching (Zhang et al., 2020a). Moreover, when the target tensor has order two, our problem reduces to the widely studied low-rank matrix recovery/estimation (Recht et al., 2010; Li et al., 2019; Ma et al., 2019; Sun and Luo, 2015; Tu et al., 2016; Wang et al., 2017; Zhao et al., 2015; Zheng and Lafferty, 2015; Charisopoulos et al., 2021; Luo et al., 2020; Bauch et al., 2021).…”
Section: Related Literature
Citation type: mentioning (confidence: 99%)
“…In both tensor regression and SVD, the estimation upper bounds in Theorems 3 and 4 match the lower bounds in the literature ((Zhang and Xia, 2018, Theorem 3) and (Zhang et al., 2020a, Theorem 5)), which shows that RGN achieves the minimax optimal rate of estimation error. Compared to existing algorithms in the literature on tensor regression and SVD (Ahmed et al., 2020; Chen et al., 2019a; Han et al., 2020b; Zhang and Xia, 2018), RGN is the first to achieve the minimax rate-optimal estimation error with only a double-logarithmic number of iterations, attributed to its second-order convergence. Suppose d, r are fixed and the condition number κ is of order O(1); we note that the sample size requirement (n ≥ O(√…”
Section: Tensor SVD
Citation type: mentioning (confidence: 99%)
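
The link between second-order (quadratic) convergence and a double-logarithmic iteration count follows from a standard argument, sketched below in generic form (not the cited paper's specific analysis): if the estimation error contracts quadratically, only $O(\log\log(1/\varepsilon))$ iterations are needed to reach accuracy $\varepsilon$.

```latex
% Generic sketch: quadratic contraction e_{t+1} <= C e_t^2 with C e_0 < 1 implies
C e_t \le (C e_0)^{2^t},
\quad\text{so}\quad
e_t \le \varepsilon \;\text{ once }\;
t \ge \log_2\!\left(\frac{\log\bigl(1/(C\varepsilon)\bigr)}{\log\bigl(1/(C e_0)\bigr)}\right)
 = O\bigl(\log\log(1/\varepsilon)\bigr).
```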