2017
DOI: 10.1007/s11045-017-0481-0
Fundamental tensor operations for large-scale data analysis using tensor network formats

Abstract: We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also provide a brief review of mathematical properties of the TT decomposition as a low-rank approximation technique. With the…
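The basic operations named in the abstract (Kronecker, Hadamard, and contracted products) can be sketched in NumPy; the shapes below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Kronecker product of two matrices: (2x2) kron (2x2) -> (4x4)
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
K = np.kron(A, B)

# Hadamard (elementwise) product requires equal shapes
H = A * B

# Contracted (tensor-times-matrix) product: contract mode 1 of a
# 3rd-order tensor X with a matrix U along the matching index
X = np.arange(24).reshape(2, 3, 4)
U = np.ones((5, 2))
Y = np.einsum('ij,jkl->ikl', U, X)  # result has shape (5, 3, 4)
```

The `einsum` subscript string makes the contracted index (`j`) explicit, which mirrors the index notation used for contracted products in tensor calculus.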

Cited by 54 publications (59 citation statements); references 43 publications (138 reference statements).
“…Basic tensor operations are summarized in Table 2.1, and illustrated in Figures 2.1–2.13. We refer to [43,119,128] for more detail regarding the basic notations and tensor operations. For convenience, general operations, such as vec(·) or diag(·), are defined similarly to the MATLAB syntax.…”
Section: Basic Multilinear Operations
confidence: 99%
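The MATLAB-style vec(·) and diag(·) conventions referred to in this citation can be sketched in NumPy (column-major flattening matches MATLAB's `A(:)`):

```python
import numpy as np

# vec(.): stack the columns of a matrix into one vector
# (order='F' gives the column-major, MATLAB-like ordering)
A = np.array([[1, 2], [3, 4]])
vecA = A.flatten(order='F')

# diag(.): MATLAB-style dual behavior --
# a vector builds a diagonal matrix, a matrix yields its diagonal
D = np.diag([1, 2, 3])  # 3x3 diagonal matrix
d = np.diag(D)          # extracts [1, 2, 3] again
```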
“…i_{n,K_n}) [128]. Furthermore, the nested (hierarchical) form of such a generalized Tucker decomposition leads to the Tree Tensor Network States (TTNS) model [149] (see Figure 2.15 and Figure 2.18), with possibly a varying order of cores, which can be formulated as X = ⟦G₁; B⁽¹⁾, B⁽²⁾, …”
Section: CP Decomposition, Tucker Decomposition
confidence: 99%
“…Because of the multidimensional structure of the trial data or controller parameters, respectively, complexity and storage demand can be significantly reduced by tensor structures such as Tensor Trains, Tucker, Hierarchical Tucker, or Canonical Polyadic (CP) decompositions [8]. For many of these decomposition algorithms, implementations and standard tools are already available [9–11]. This paper is organized as follows.…”
Section: Introduction
confidence: 99%
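The storage-reduction claim in this citation is easy to quantify with standard parameter counts: a full order-N tensor with mode size I needs I^N entries, a rank-R CP model needs N·I·R, and a TT model with uniform rank R needs roughly N·I·R² (the boundary cores are smaller). The numbers below are illustrative only:

```python
# Parameter counts for a full tensor vs. CP and TT representations.
# N = order, I = size of each mode, R = rank (illustrative values).
N, I, R = 10, 20, 5

full_storage = I ** N                          # dense tensor: I^N entries
cp_storage = N * I * R                         # N factor matrices of size I x R
tt_storage = 2 * I * R + (N - 2) * I * R * R   # boundary cores I x R, inner cores I x R x R

print(full_storage, cp_storage, tt_storage)
```

Even at these modest sizes the dense tensor needs over 10^13 entries while both decomposed forms stay in the thousands, which is the complexity reduction the quoted introduction appeals to.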