2021
DOI: 10.48550/arxiv.2110.03975
Preprint

Tensor train completion: local recovery guarantees via Riemannian optimization

Abstract: In this work we estimate the number of randomly selected elements of a tensor that with high probability guarantees local convergence of Riemannian gradient descent for tensor train completion. We derive a new bound for the orthogonal projections onto the tangent spaces based on the harmonic mean of the unfoldings' singular values and introduce a notion of core coherence for tensor trains. We also extend the results to tensor train completion with side information and obtain the corresponding local convergence…
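The abstract mentions a bound based on the harmonic mean of the unfoldings' singular values. The sketch below is a hypothetical Python/NumPy illustration of that kind of quantity, not the paper's actual definition: the function names, and the choice of taking the r_k-th singular value of each sequential unfolding, are assumptions made only for the example.

import numpy as np

def unfolding_singular_values(tensor, k):
    # Singular values of the k-th sequential unfolding: reshape the first k
    # modes into rows and the remaining modes into columns, then take an SVD.
    shape = tensor.shape
    rows = int(np.prod(shape[:k]))
    cols = int(np.prod(shape[k:]))
    return np.linalg.svd(tensor.reshape(rows, cols), compute_uv=False)

def harmonic_mean_unfolding_svals(tensor, tt_ranks):
    # Harmonic mean of the r_k-th singular value of each unfolding, k = 1..d-1.
    # Illustration only; the exact bound is defined in the paper itself.
    d = tensor.ndim
    picked = []
    for k in range(1, d):
        s = unfolding_singular_values(tensor, k)
        picked.append(s[tt_ranks[k - 1] - 1])
    picked = np.asarray(picked)
    return len(picked) / np.sum(1.0 / picked)

# Usage: a random 4-way tensor with assumed TT-ranks (2, 2, 2).
A = np.random.randn(5, 6, 7, 8)
print(harmonic_mean_unfolding_svals(A, (2, 2, 2)))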

Cited by 2 publications (3 citation statements)
References 29 publications
“…We would also like to add a few words about the bump in the phase transition curves that we observed for m n. In [75], such bump can be recognized on the phase plot corresponding to Riemannian tensor train completion with side information of a 10-dimensional tensor. At the same time, the results of [76] tell us that Riemannian gradient descent converges locally if the number of samples |Ω| exceeds a certain threshold that depends on m and is independent of n, and this behavior is indeed seen on the phase plots for larger values of n. However, random initialization that is used in [75] (and in this paper too) certainly does not put the initial condition into the basin of local attraction. Recent results on non-convex optimization [82][83][84] show that gradient descent converges globally in certain problems (including matrix completion) when initialized randomly, provided, of course, that |Ω| is large enough.…”
Section: Discussion (mentioning)
confidence: 64%
“…Completion and factorization problems with different kinds of auxiliary information have been studied in the literature for CP [66][67][68][69][70][71][72][73] and Tucker [74] decompositions. Side information, as we defined it, received less attention: it was used for TT completion [75,76], Tucker completion [77], and we have not seen such papers for the CP decomposition. Similar formulations appear in kernelized matrix completion [78,79].…”
Section: Introduction (mentioning)
confidence: 99%
“…Intuitively the smaller |Ω|, the more difficult it is to recover A from A Ω by solving (1). However, the minimal number of samples needed is not known [6].…”
Section: Introduction (mentioning)
confidence: 99%
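The quoted passage concerns recovering a tensor A from its sampled entries A_Ω; equation (1) of the citing paper is not reproduced here. The following hypothetical Python/NumPy sketch only illustrates the standard sampling-and-misfit setup the quote alludes to (the names sample_mask and completion_misfit are assumptions, not notation from either paper).

import numpy as np

def sample_mask(A, num_samples, seed=0):
    # Boolean mask Omega with `num_samples` uniformly sampled observed entries.
    rng = np.random.default_rng(seed)
    mask = np.zeros(A.size, dtype=bool)
    mask[rng.choice(A.size, size=num_samples, replace=False)] = True
    return mask.reshape(A.shape)

def completion_misfit(X, A, Omega):
    # 0.5 * ||P_Omega(X - A)||_F^2 : discrepancy on the observed entries only.
    return 0.5 * np.sum((X[Omega] - A[Omega]) ** 2)

# Usage: the fewer entries Omega contains, the less the misfit constrains X,
# which is the intuition behind the quoted sentence.
A = np.random.randn(4, 5, 6)
Omega = sample_mask(A, num_samples=30)
print(completion_misfit(np.zeros_like(A), A, Omega))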