2019
DOI: 10.1137/18m1192408

A Geometric Description of Feasible Singular Values in the Tensor Train Format

Abstract: Tree tensor networks such as the tensor train format are a common tool for high-dimensional problems. The associated multivariate rank and the corresponding tuples of singular values are based on different matricizations of the same tensor. While the behavior of these singular values is as essential as in the matrix case, here the question of the feasibility of specific constellations arises: which prescribed tuples can be realized as singular values of a tensor, and what is this feasible set? We first show the equivalence of the te…
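
As a hedged illustration of the objects the abstract refers to (not code from the paper itself), the following NumPy sketch computes the tuples of singular values of the sequential matricizations of a single tensor, i.e. the unfoldings underlying the tensor train format; the tensor shape is an arbitrary choice for the example.

import numpy as np

# Illustrative sketch only: the k-th TT unfolding of a d-way tensor groups
# modes 1..k into the rows and modes k+1..d into the columns. The singular
# value tuples of these matricizations are the objects whose joint
# feasibility the paper studies.
rng = np.random.default_rng(0)
shape = (3, 4, 2, 5)                      # arbitrary example shape
X = rng.standard_normal(shape)

for k in range(1, len(shape)):
    rows = int(np.prod(shape[:k]))        # sizes of modes 1..k
    cols = int(np.prod(shape[k:]))        # sizes of modes k+1..d
    unfolding = X.reshape(rows, cols)     # C-order reshape gives this matricization
    sigma = np.linalg.svd(unfolding, compute_uv=False)
    print(f"unfolding {k}: shape {unfolding.shape}, "
          f"singular values {np.round(sigma, 3)}")

Each run produces one tuple of singular values per unfolding; the paper asks which such collections of tuples can occur simultaneously for some tensor.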

Cited by 5 publications (8 citation statements). References: 32 publications.

“…Thus, by (16) and (50)-(52), the values α_i, β_i, and γ_i are eigenvalues of A, B, and A+B, respectively. Hence, by Horn's conjecture, (14) and (15) hold.…”
Section: [N] (mentioning)
confidence: 67%
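
The quoted argument rests on Horn-type relations between the spectra of A, B, and A+B. Purely as a hedged numerical aside (random symmetric matrices, not the A and B of the citing paper), the sketch below checks Weyl's inequalities, one family among the Horn inequalities, for the descending eigenvalues α, β, γ of A, B, and A+B.

import numpy as np

# Hedged aside: Weyl's inequalities, a special family of the Horn
# inequalities, state gamma_{i+j-1} <= alpha_i + beta_j (1-based indices,
# eigenvalues sorted in descending order) for symmetric A, B and A + B.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

alpha = np.linalg.eigvalsh(A)[::-1]       # descending eigenvalues of A
beta = np.linalg.eigvalsh(B)[::-1]        # descending eigenvalues of B
gamma = np.linalg.eigvalsh(A + B)[::-1]   # descending eigenvalues of A + B

# 0-based translation: gamma[i + j] <= alpha[i] + beta[j] whenever i + j < n.
ok = all(gamma[i + j] <= alpha[i] + beta[j] + 1e-12
         for i in range(n) for j in range(n) if i + j < n)
print("Weyl inequalities satisfied:", ok)
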
“…Horn's Conjecture has recently also been linked to singular values of matrix unfoldings in the Tensor Train format [14].…”
Section: On the Largest Multilinear Singular Values of Higher-Order Tensors (mentioning)
confidence: 99%
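
For context on the section title, here is a hedged NumPy sketch (arbitrary example data, not taken from [14]) of multilinear singular values, i.e. the singular values of the mode-n unfoldings, the Tucker/HOSVD counterparts of the tensor train matricizations above.

import numpy as np

# Hedged illustration: the multilinear singular values of a tensor are the
# singular values of its mode-n unfoldings X_(n), where mode n indexes the
# rows and all remaining modes are flattened into the columns.
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5))        # arbitrary example tensor

for n in range(X.ndim):
    X_n = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
    sigma = np.linalg.svd(X_n, compute_uv=False)
    print(f"mode-{n + 1} singular values:", np.round(sigma, 3))
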
“…, r_µ (and can thereby also ignore that the combination of the new γ and θ might not be feasible, cf. [25]). 6 Behavior of the SALSA Filter: A deeper understanding of the regularization utilized by SALSA and the reason for the lower bound σ_min is provided by the filter, as indicated by (3.7) for the matrix case and as defined by Corollary 5.8 for tensors. Throughout this section, we assume that the sampling is such that for the minimizer in Theorem 5.6 it (approximately) holds vec(N_+(j)) = F vec(L^T B(j) R^T), (6.1), which is true at least for P = I (cf.…”
Section: Stability (mentioning)
confidence: 99%
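
The quoted passage refers to a regularizing filter acting on singular values with a lower bound σ_min. As a loose sketch of that general idea only (a generic smooth filter shape, explicitly not the expression (3.7) or Corollary 5.8 of the citing paper), one could dampen contributions whose singular values fall near or below σ_min as follows.

import numpy as np

# Generic illustration only: a smooth filter that leaves singular values
# well above sigma_min nearly untouched and suppresses those below it.
# This is NOT the SALSA filter of the cited work, just a common
# sigma^2 / (sigma^2 + sigma_min^2) shape used to sketch the idea.
def generic_filter(sigma, sigma_min):
    sigma = np.asarray(sigma, dtype=float)
    return sigma**2 / (sigma**2 + sigma_min**2)

sigma = np.array([3.0, 1.0, 0.3, 0.05])   # arbitrary example values
print(np.round(generic_filter(sigma, sigma_min=0.2), 3))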