2022
DOI: 10.1137/20m1387158

Parallel Algorithms for Tensor Train Arithmetic

Cited by 19 publications (9 citation statements)
References 32 publications
Citation types: 0 supporting, 9 mentioning, 0 contrasting
“…In the tensor literature, there are several parallel and distributed systems for processing large-scale tensors. We can list here some efficient tools for: (a) distributed CP decomposition (e.g., DFacTo [132], SPLATT [133]), (b) distributed Tucker decomposition (e.g., DHOSVD [85], SGD-Tucker [134]), and (c) distributed TT decomposition (e.g., ADTT [110], ATTAC [135]), etc. These tools mainly distribute the unfolding matrices or sub-tensors among several clusters and integrate their low-rank tensor approximations to find the overall low-rank approximation of the underlying tensor.…”
Section: Efficient and Scalable Tensor Tracking
Citation type: mentioning; confidence: 99%
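The shared pattern these tools exploit, compressing unfolding matrices of sub-tensors by truncated SVD and then recombining the local low-rank pieces, can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function names are hypothetical and do not match the API of any tool listed above.

import numpy as np

def unfold(tensor, mode):
    # Mode-`mode` unfolding: rows indexed by the chosen mode,
    # columns by all remaining modes (row-major flattening).
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def truncated_svd(mat, rank):
    # Best rank-`rank` factors of `mat`; in a distributed setting each
    # worker would compress its own sub-tensor's unfolding like this
    # before the local factors are merged into a global approximation.
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank]

# Example: a rank-5 approximation of one unfolding of a 3-way tensor.
x = np.random.default_rng(0).standard_normal((8, 9, 10))
u, s, vt = truncated_svd(unfold(x, mode=1), rank=5)
approx = u @ np.diag(s) @ vt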
“…When we consider $u_{\mathrm{TT}}(Ax) = u_{\mathrm{TT}}(y)$ in coordinates $y$, equation (12) is the FTT expansion for $u_{\mathrm{TT}}(y)$. However, when we consider $v(x) = u_{\mathrm{TT}}(Ax)$ in coordinates $x$, each mode $\psi_i(\alpha_{i-1};\, a_i \cdot x;\, \alpha_i)$ in the tensor ridge function (12) is no longer a univariate function of $x_i$ as in (8), but rather a $d$-variate ridge function, which has the property of being constant in all directions orthogonal to the vector $a_i$ (e.g., [7,29]). An important problem is determining the FTT expansion…”
Section: Tensor Ridge Functions
Citation type: mentioning; confidence: 99%
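For readers without the citing paper at hand, the standard FTT expansion in the quoted notation reads as follows; the boundary convention $\alpha_0 = \alpha_d = 1$ and the summation ranges are assumptions here, since the paper's equations (8) and (12) are not reproduced in the excerpt.

$$
u_{\mathrm{TT}}(y) \;=\; \sum_{\alpha_0, \dots, \alpha_d} \prod_{i=1}^{d} \psi_i(\alpha_{i-1};\, y_i;\, \alpha_i), \qquad \alpha_0 = \alpha_d = 1.
$$

Substituting $y = Ax$, where $a_i$ denotes the $i$-th row of $A$, turns each mode into $\psi_i(\alpha_{i-1};\, a_i \cdot x;\, \alpha_i)$. Its gradient in $x$ is proportional to $a_i$ by the chain rule, so the mode is constant along every direction orthogonal to $a_i$, which is exactly the ridge property the passage describes.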
“…The final estimate for computing the matrix $D_i$ is $O(d^2 n^3 r^3)$, which dominates the cost of performing one step of (49). We point out that the computation of $D_i$ may be incorporated into high-performance computing algorithms for tensor train rounding, e.g., [8,36].…”
Section: Computational Cost
Citation type: mentioning; confidence: 99%
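Tensor train rounding, which the passage proposes to piggyback on, is the core operation parallelized in the cited paper. A minimal serial reference version is sketched below: a right-to-left QR orthogonalization sweep followed by a left-to-right truncated-SVD sweep. The function name and the error-budget split delta = eps * norm / sqrt(d - 1) follow common convention and are assumptions, not code from the paper.

import numpy as np

def tt_round(cores, eps=1e-10):
    # Round a tensor train to smaller ranks at relative accuracy eps.
    # cores[i] has shape (r_{i-1}, n_i, r_i), with r_0 = r_d = 1.
    d = len(cores)
    cores = [c.copy() for c in cores]

    # Right-to-left sweep: make cores 1..d-1 right-orthogonal via QR.
    for i in range(d - 1, 0, -1):
        r0, n, r1 = cores[i].shape
        q, rfac = np.linalg.qr(cores[i].reshape(r0, n * r1).T)
        k = q.shape[1]
        cores[i] = q.T.reshape(k, n, r1)
        # Absorb the triangular factor into the neighboring core.
        cores[i - 1] = np.einsum('abc,cd->abd', cores[i - 1], rfac.T)

    # The whole norm now sits in cores[0]; split the error budget
    # over the d - 1 truncated SVDs of the second sweep.
    delta = eps * np.linalg.norm(cores[0]) / max(np.sqrt(d - 1.0), 1.0)

    # Left-to-right sweep: truncated SVD of each core's left unfolding.
    for i in range(d - 1):
        r0, n, r1 = cores[i].shape
        u, s, vt = np.linalg.svd(cores[i].reshape(r0 * n, r1),
                                 full_matrices=False)
        k = s.size
        while k > 1 and np.linalg.norm(s[k - 1:]) <= delta:
            k -= 1  # discard trailing singular values within tolerance
        cores[i] = u[:, :k].reshape(r0, n, k)
        # Push S @ V^T into the next core.
        cores[i + 1] = np.einsum('ab,bcd->acd',
                                 s[:k, None] * vt[:k], cores[i + 1])
    return cores

# Example: round a random rank-20 TT of a 6-way tensor.
rng = np.random.default_rng(0)
shapes = [(1, 10, 20)] + [(20, 10, 20)] * 4 + [(20, 10, 1)]
rounded = tt_round([rng.standard_normal(s) for s in shapes], eps=1e-8)

The parallel algorithms in the cited works restructure exactly these two sweeps; the sketch above is only the textbook serial baseline.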
“…In the tensor literature, there are several parallel and distributed systems for processing large-scale tensors. We can list here some efficient tools for: (a) distributed CP decomposition (e.g., DFacTo [176], SPLATT [177]), (b) distributed Tucker decomposition (e.g., DHOSVD [88], SGD-Tucker [178]), and (c) distributed TT decomposition (e.g., ADTT [114], ATTAC [179]), etc. These tools mainly distribute the unfolding matrices or sub-tensors among several clusters and integrate their low-rank tensor approximations to find the overall low-rank approximation of the underlying tensor.…”
Section: Efficient and Scalable Tensor Tracking
Citation type: mentioning; confidence: 99%