2021
DOI: 10.48550/arxiv.2104.12031
Preprint

Low-rank Tensor Estimation via Riemannian Gauss-Newton: Statistical Optimality and Second-Order Convergence

Abstract: In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. The general problem covers many specific examples arising from applications, including tensor regression, tensor completion, and tensor PCA/SVD. We propose a Riemannian Gauss-Newton (RGN) method with fast implementations for low Tucker rank tensor estimation. Different from the generic (super)linear convergence guarantee of RGN in the literature, we prove the first quadratic convergence guarantee o…

Cited by 5 publications (6 citation statements) | References 94 publications
“…[ARB20] proposed a tensor regression model where the tensor is simultaneously low-rank and sparse in the Tucker decomposition. A concurrent work [LZ21] proposed a Riemannian Gauss-Newton algorithm, and obtained an impressive quadratic convergence rate for tensor regression (see Table 2). Compared with ScaledGD, this algorithm runs in the tensor space, and the update rule is more sophisticated with higher per-iteration cost by solving a least-squares problem and performing a truncated HOSVD every iteration.…”
Section: Additional Related Work
confidence: 99%
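
The quoted update rule mentions a truncated HOSVD at every iteration. As a rough illustration of that building block, here is a minimal numpy sketch of a rank-(r1, r2, r3) truncated HOSVD of a 3-way tensor; the function name and implementation details are assumptions for illustration, not the implementation of [LZ21] or ScaledGD.

import numpy as np

def truncated_hosvd(T, ranks):
    """Rank-(r1, r2, r3) truncated HOSVD of a 3-way tensor T (illustrative sketch).

    Takes the top-r_k left singular vectors of each mode-k unfolding as the
    factor U_k, then forms the core by projecting T onto those factors.
    """
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: bring axis `mode` to the front and flatten the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])

    # Core S = T x_1 U1^T x_2 U2^T x_3 U3^T (multilinear projection onto the factors).
    S = T
    for mode, U in enumerate(factors):
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, mode, 0), axes=1), 0, mode)
    return S, factors

# Usage: project a (noisy) tensor back to Tucker rank (2, 2, 2).
X = np.random.randn(10, 12, 14)
S, (U1, U2, U3) = truncated_hosvd(X, (2, 2, 2))
X_hat = np.einsum('abc,ia,jb,kc->ijk', S, U1, U2, U3)  # (U1, U2, U3) . S
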
“…To proceed, we need to control $\|(U_0, V_0, W_0) \cdot \mathcal{S}_0 - \mathcal{X}_\star\|_{\mathrm F}$, where $(U_0, V_0, W_0) \cdot \mathcal{S}_0$ is the output of HOSVD, which has been considered in [LZ21, HWZ20, ZLRY20]. Invoking the result in [HWZ20, Appendix D.2…”
Section: D.2 Proof of Spectral Initialization (Lemma 5)
confidence: 99%
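
For reference when parsing the quoted quantity, the Tucker notation it relies on can be spelled out as follows (standard conventions, assumed rather than copied from [LZ21] or [HWZ20]): the multilinear product $(U, V, W) \cdot \mathcal{S} = \mathcal{S} \times_1 U \times_2 V \times_3 W$ has entries
\[
  \bigl[(U, V, W) \cdot \mathcal{S}\bigr]_{ijk} = \sum_{a,b,c} \mathcal{S}_{abc}\, U_{ia}\, V_{jb}\, W_{kc},
\]
so the spectral-initialization error being controlled, $\|(U_0, V_0, W_0) \cdot \mathcal{S}_0 - \mathcal{X}_\star\|_{\mathrm F}$, is the Frobenius-norm distance between the HOSVD initialization and the ground-truth tensor $\mathcal{X}_\star$.
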
“…Tensor data are routinely employed in data and information sciences to model (structured) multi-dimensional objects [3,4,5,6,7,8,9]. In many practical scenarios of interest, however, we do not have full access to a large-dimensional tensor of interest, as only a sampling of its entries are revealed to us; yet we would still wish to reliably infer all missing data.…”
Section: A Noisy Low-rank Tensor Completion
confidence: 99%
“…Broadly speaking, tensor RPCA concerns with reconstructing a high-dimensional tensor with certain low-dimensional structures from incomplete and corrupted observations. Pertaining to works that deal with the Tucker decomposition, [XY19] proposed a gradient descent based algorithm for tensor completion, [TMPB + 22,TMC22] proposed scaled gradient descent algorithms for tensor regression and tensor completion (which our algorithm also adopts), [LZ21] proposed a Gauss-Newton algorithm for tensor regression that achieves quadratic convergence, [WCW21] proposed a Riemannian gradient method with entrywise convergence guarantees, and [ARB20] studied tensor regression assuming the underlying tensor is simultaneously low-rank and sparse.…”
Section: Related Work
confidence: 99%
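
As a rough common reference for the observation models these works study (a generic formulation, not taken verbatim from any of the cited papers): one observes
\[
  \mathcal{Y}_{ijk} = \mathcal{X}^\star_{ijk} + \mathcal{S}^\star_{ijk} + \mathcal{E}_{ijk}, \qquad (i,j,k) \in \Omega,
\]
where $\mathcal{X}^\star$ has low Tucker rank, $\mathcal{S}^\star$ is a sparse corruption tensor, $\mathcal{E}$ is noise, and $\Omega$ is the set of revealed entries. Taking $\Omega$ to be all entries recovers tensor RPCA without missing data, while $\mathcal{S}^\star = 0$ recovers noisy tensor completion and, with general linear measurements in place of entrywise sampling, tensor regression.
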