2017
DOI: 10.48550/arXiv.1708.00132
Preprint

On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm

Masaaki Imaizumi,
Takanori Maehara,
Kohei Hayashi

Abstract: Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantages, we face two crucial limitations when applying the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address these limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop an alternating optimization method with a ran…
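For context on the TT format the abstract builds on, here is a minimal sketch of the classical TT-SVD factorization in NumPy. This is background illustration only, not the paper's convex relaxation or alternating method; the function name `tt_svd` and the uniform rank cap `max_rank` are assumptions made for this sketch.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Factor a d-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k).

    Illustrative sketch via sequential truncated SVDs; `max_rank` caps
    every TT rank uniformly (a simplifying assumption).
    """
    dims = tensor.shape
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rank_next = min(max_rank, len(S))
        cores.append(U[:, :rank_next].reshape(rank_prev, dims[k], rank_next))
        # Carry the remaining factor forward and fold in the next mode.
        mat = (S[:rank_next, None] * Vt[:rank_next]).reshape(
            rank_next * dims[k + 1], -1)
        rank_prev = rank_next
    cores.append(mat.reshape(rank_prev, dims[-1], 1))
    return cores

# Usage: a 4-way tensor compressed to TT ranks at most 3.
X = np.random.randn(4, 5, 6, 7)
cores = tt_svd(X, max_rank=3)
print([c.shape for c in cores])  # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```

Each core has shape (r_{k-1}, n_k, r_k), so storage grows linearly in the tensor order rather than exponentially, which is the space efficiency the abstract refers to.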

Cited by 2 publications (3 citation statements)
References 20 publications

“…Let $T_0$ be initialized by the sequential second-order method as in Algorithm 3 and let $\{T_l\}_{l=1}^{l_{\max}}$ be the iterates produced by Algorithm 2, where $l_{\max}$ is the maximum number of iterations and the stepsize is $\alpha = 0.12\,n^{-1}d^*$. There exist absolute constants $C_{m,1}, C_{m,2} > 0$ depending only on $m$ such that if the sample size $n$ satisfies […]. This improves the existing result (Imaizumi et al., 2017) based on matricization and matrix nuclear norm penalization. Moreover, for the case $m = 3$, Barak and Moitra (2016) conjecture, based on a reduction to the Boolean satisfiability problem, that $O(d^{3/2})$ is a lower bound on the sample size for which a polynomial-time algorithm for exact tensor completion exists.…”
Section: Exact Recovery and Convergence Analysis
confidence: 51%
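For intuition only, here is a deliberately simplified sketch of the kind of stepsize-$\alpha$ gradient iteration the quoted statement refers to, applied to the squared loss over the $n$ observed entries. This is not the cited Algorithm 2, which uses a TT-manifold retraction and a sequential second-order initialization; `gradient_step`, `observed_idx`, and `d_star` are assumptions made for this sketch.

```python
import numpy as np

def gradient_step(T, observed_idx, observed_vals, alpha):
    """One plain gradient step on f(T) = 0.5 * sum of squared residuals
    over observed entries: a simplified stand-in for the cited iteration
    T_{l+1} = T_l - alpha * grad f(T_l), without the TT retraction.
    Assumes the observed indices are distinct."""
    grad = np.zeros_like(T)
    grad[observed_idx] = T[observed_idx] - observed_vals
    return T - alpha * grad

# Usage with the quoted stepsize alpha = 0.12 * n**-1 * d_star; n is the
# sample size, and d_star here is a placeholder for the dimension-dependent
# quantity in the cited work (its definition is not reproduced in the excerpt).
T = np.random.randn(8, 8, 8)
idx = tuple(np.random.randint(0, 8, size=(3, 50)))  # 50 observed entries
vals = np.random.randn(50)
n, d_star = 50, 8 * 3
T_next = gradient_step(T, idx, vals, alpha=0.12 * d_star / n)
```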
“…These prior works mostly focus on methodology and algorithm design without, or with rather limited, theoretical justification. Towards that end, Imaizumi et al. (2017) […] (2020) established the statistically optimal convergence rates of tensor SVD by the fast higher-order orthogonal iteration algorithm in the TT format.…”
Section: Introduction
confidence: 99%
“…This involves the derivation of an analytic expression for geodesics, as well as an expression for the Riemannian Hessian in the respective product manifolds. Another important motivation for phrasing GST as a randomly subsampled tensor completion problem is to bring it closer to potential analytical recovery guarantees common for related tensor completion problems [50–57], opening up a new research direction.…”
Section: Introduction
confidence: 99%