2020
DOI: 10.48550/arxiv.2006.12798
Preprint

Note: low-rank tensor train completion with side information based on Riemannian optimization

Abstract: We consider the low-rank tensor train completion problem when additional side information is available in the form of subspaces that contain the mode-k fiber spans. We propose an algorithm based on Riemannian optimization to solve the problem. Numerical experiments show that the proposed algorithm requires far fewer known entries to recover the tensor compared to standard tensor train completion methods.
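
For orientation, the problem can be sketched as follows (a standard formulation; the notation below, namely the fixed-TT-rank manifold $\mathcal{M}_{\mathbf{r}}$, the sampling set $\Omega$, and the side-information matrices $U_k$, is assumed here rather than quoted from the paper):

$$\min_{\mathcal{X} \in \mathcal{M}_{\mathbf{r}}} \; \frac{1}{2} \left\| P_\Omega(\mathcal{X} - \mathcal{A}) \right\|_F^2 \quad \text{s.t.} \quad \operatorname{span}\{\text{mode-}k \text{ fibers of } \mathcal{X}\} \subseteq \operatorname{col}(U_k), \quad k = 1, \dots, d,$$

where $\mathcal{A}$ is the partially observed tensor, $P_\Omega$ zeroes out all entries outside $\Omega$, and $U_k \in \mathbb{R}^{n_k \times m_k}$ has the given subspace as its column space. The constraint is equivalent to writing $\mathcal{X} = \mathcal{G} \times_1 U_1 \times_2 \cdots \times_d U_d$ for a smaller core tensor $\mathcal{G} \in \mathbb{R}^{m_1 \times \cdots \times m_d}$ with the same TT ranks, which is what allows the required number of samples to scale with the $m_k$ rather than the $n_k$.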

Cited by 2 publications (5 citation statements). References 8 publications.

Citation statements
“…Indeed, a rank-3 CP tensor of size 300×300×300 can be completed from 1% of its elements, and only 0.004% are needed when 30-dimensional side-information subspaces are available for all of its fibers (columns, rows, and tubes); for a larger 1000 × 1000 × 1000 tensor with the same side information this reduces to 0.0001%. Similar behavior has been observed for Riemannian TT completion with side information [75].…”
Section: Performance of Completion (supporting; confidence: 81%)
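
A back-of-the-envelope check of these figures (arithmetic added here, not taken from the cited work): $0.004\%$ of $300^3$ is $4 \cdot 10^{-5} \cdot 2.7 \cdot 10^7 = 1080$ entries, and $0.0001\%$ of $1000^3$ is $10^{-6} \cdot 10^9 = 1000$ entries. A rank-3 CP tensor whose factors are constrained to 30-dimensional subspaces has about $3 \cdot 30 \cdot 3 = 270$ free parameters (three reduced $30 \times 3$ factors), independent of the mode size $n$, so both sample counts are a small constant multiple ($\approx 4\times$) of the parameter count, consistent with the sample requirement becoming essentially independent of $n$ once side information is available.
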
“…We would also like to add a few words about the bump in the phase transition curves that we observed for m ≈ n. In [75], such a bump can be recognized in the phase plot corresponding to Riemannian tensor train completion with side information of a 10-dimensional tensor. At the same time, the results of [76] tell us that Riemannian gradient descent converges locally if the number of samples |Ω| exceeds a certain threshold that depends on m and is independent of n, and this behavior is indeed seen on the phase plots for larger values of n. However, the random initialization that is used in [75] (and in this paper too) certainly does not put the initial condition into the basin of local attraction. Recent results on non-convex optimization [82–84] show that gradient descent converges globally in certain problems (including matrix completion) when initialized randomly, provided, of course, that |Ω| is large enough.…”
Section: Discussion (mentioning; confidence: 96%)
“…This behavior is also well aligned with the numerical experiments carried out in [40], where a modified RTTC algorithm [33] was introduced to solve (25).…”
Section: Proofs (supporting; confidence: 72%)