2015
DOI: 10.1016/j.jcp.2014.10.009

Randomized interpolative decomposition of separated representations

Abstract: We introduce an algorithm to compute the tensor Interpolative Decomposition (tensor ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ε, a near-optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. Tensor ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to…

Cited by 24 publications (20 citation statements) · References 59 publications
“…If ALS is used to reduce the separation rank and $d$ is the dimension, $r$ is the maximum separation rank (after reductions) during the iteration, and $M$ is the maximum number of components in each direction, then the computational cost of each rank reduction step can be estimated as $O(r^4 \cdot M \cdot d \cdot N_{\mathrm{iter}})$, where $N_{\mathrm{iter}}$ is the number of iterations required by the ALS algorithm to converge. The computational cost of the CTD-ID algorithm is estimated as $O(r^3 \cdot M \cdot d)$ and, if it is used instead of ALS, the reduction step is faster by a factor of $O(r \cdot N_{\mathrm{iter}})$ [8]. We note that, while linear in dimension, Algorithm 2 may require significant computational resources due to the cubic (or quartic for ALS) dependence on the separation rank $r$.…”
Section: Discussion
confidence: 99%
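The $O(r^3 \cdot M \cdot d)$ estimate for CTD-ID can be made concrete: inner products between rank-one terms factor across the $d$ directions, so the $r \times r$ Gram matrix of the terms costs $O(r^2 \cdot M \cdot d)$, and the interpolative decomposition itself is an $O(r^3)$ rank-revealing factorization. The sketch below illustrates one such Gram-based tensor ID; it is a minimal illustration of the idea, not the paper's randomized implementation, and the function names and tolerance are placeholders.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def ctd_gram(factors):
    """Gram matrix of the r terms of a CTD given by d factor matrices
    of shape (M, r). Inner products of rank-one terms factor across
    directions, so G is the Hadamard product of the per-direction
    Gram matrices: cost O(r^2 * M * d)."""
    r = factors[0].shape[1]
    G = np.ones((r, r))
    for F in factors:
        G *= F.T @ F
    return G

def ctd_id(factors, eps=1e-6):
    """Select a near-optimal subset (skeleton) of CTD terms and the
    coefficients expressing the remaining terms through it, via a
    column-pivoted QR of a square root of the Gram matrix: O(r^3)."""
    G = ctd_gram(factors)
    w, V = np.linalg.eigh(G)                            # G is PSD up to roundoff
    B = np.sqrt(np.clip(w, 0.0, None))[:, None] * V.T   # B^T B = G
    _, R, piv = qr(B, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    k = max(1, int(np.sum(diag > eps * diag[0])))       # numerical rank
    # remaining terms ~= (selected terms) @ T with T = R11^{-1} R12
    T = solve_triangular(R[:k, :k], R[:k, k:])
    return piv[:k], piv[k:], T

# Toy check: a 3-way CTD whose last two terms duplicate the first two.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((20, 8)) for _ in range(3)]
for F in factors:
    F[:, 6], F[:, 7] = F[:, 0], F[:, 1]
skel, rest, T = ctd_id(factors)
print(f"kept {len(skel)} of 8 terms")                   # expect 6
```

In this sketch the paper's randomization is replaced by a deterministic pivoted QR; the cost bookkeeping quoted above is unchanged.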
“…Unfortunately, the Frobenius norm is only weakly sensitive to changes in individual entries. The alternative s-norm (see [8, Section 4]), i.e., the largest s-value of the rank-one separated approximation to the tensor in CTD format, is better in some situations. In particular, it allows the user to lower the tolerance below single precision (within a double-precision environment).…”
Section: 2
confidence: 99%
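As an illustration of how such an s-norm can be evaluated without leaving the CTD format: a best rank-one separated approximation can be computed by an ALS (higher-order power) iteration whose inner products again factor across directions, and its norm is the largest s-value. A minimal sketch, with my own naming and a fixed iteration count rather than a convergence test:

```python
import numpy as np

def s_norm(factors, n_iter=100, seed=0):
    """Largest s-value of a CTD (norm of its best rank-one separated
    approximation), by an ALS / higher-order power iteration."""
    rng = np.random.default_rng(seed)
    us = [rng.standard_normal(F.shape[0]) for F in factors]
    us = [u / np.linalg.norm(u) for u in us]
    s = 0.0
    for _ in range(n_iter):
        for i, F in enumerate(factors):
            # coefficient of each term with direction i left out
            c = np.ones(F.shape[1])
            for l, (G, u) in enumerate(zip(factors, us)):
                if l != i:
                    c *= u @ G
            v = F @ c                      # unnormalized optimal u_i
            s = np.linalg.norm(v)          # current s-value estimate
            us[i] = v / s
    return s
```

For comparison, the Frobenius norm of the same CTD is available directly from the Gram matrix $G$ of its terms as $\|T\|_F = (\mathbf{1}^T G \mathbf{1})^{1/2}$, which is what makes it cheap but, as the quote notes, only weakly sensitive to individual entries.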
“…A literature search also reveals attempts to extend the random sampling approach to tensors based on the CP decomposition and the Tucker decomposition [33–36].…”
Section: Prior Work On Tensor Decompositions
confidence: 99%
“…Tsourakakis provided numerical examples of Tucker decomposition with the random sampling method. A literature search also reveals attempts to extend the random sampling approach to tensors based on the CP decomposition and the Tucker decomposition.…”
Section: Introduction
confidence: 99%
“…Alternative reduction algorithms. Using Algorithm 1 or 2, half of the significant digits are lost due to poor conditioning (see examples in [14] and [40]). In order to identify the "best" linearly independent terms, we can design a matrix with a better condition number if, instead of the functions of the mixture, we use a "dual" family for computing inner products.…”
Section: Initialization
confidence: 99%
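A toy numerical illustration of this conditioning effect (my own setup, not the construction referenced in [14] and [40]): for a mixture of unit-width Gaussians, the Gram matrix built from the mixture's own functions is a Gaussian kernel matrix with doubled effective width, while inner products against a delta-like "dual" family, here simply point evaluations at the centers, give a narrower kernel matrix with a visibly smaller condition number.

```python
import numpy as np

# Mixture functions f_j(x) = exp(-(x - c_j)^2 / 2) with close centers.
centers = np.linspace(0.0, 6.0, 12)
D2 = (centers[:, None] - centers[None, :]) ** 2

# <f_i, f_j> integrates to a *wider* Gaussian kernel in the centers...
gram = np.sqrt(np.pi) * np.exp(-D2 / 4.0)
# ...whereas pairing f_j with delta-like duals at the centers keeps
# the original width: entries are just f_j evaluated at c_i.
dual = np.exp(-D2 / 2.0)

print(f"cond(Gram) = {np.linalg.cond(gram):.2e}")
print(f"cond(dual) = {np.linalg.cond(dual):.2e}")
```

A properly biorthogonalized dual family would go further, but even point evaluations already avoid the kernel widening that degrades the Gram matrix.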