2016
DOI: 10.1137/15m1036919

Spectral Tensor-Train Decomposition

Abstract: The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. W…
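The TT format factors a d-way coefficient tensor into a chain of three-way cores, and the paper builds a functional version of this idea. As a point of reference, here is a minimal NumPy sketch of the standard discrete TT-SVD (sequential truncated SVDs of unfoldings); this is the classical discrete algorithm, not the paper's spectral extension, and the tolerance handling is illustrative.

```python
import numpy as np

def tt_svd(A, tol=1e-10):
    """Decompose a d-way array into tensor-train cores via sequential SVDs.

    Minimal sketch of the discrete TT-SVD; the paper's spectral method
    works with a functional analogue of this factorization.
    """
    d, shape = A.ndim, A.shape
    cores, r = [], 1
    C = A.reshape(shape[0], -1)
    for k in range(d - 1):
        C = C.reshape(r * shape[k], -1)          # k-th unfolding
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > tol)))        # truncated TT rank r_k
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]               # carry remainder rightward
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

# Round-trip check: contract the cores back into a full tensor.
A = np.random.rand(4, 5, 6, 7)
cores = tt_svd(A, tol=1e-12)
B = cores[0]
for G in cores[1:]:
    B = np.tensordot(B, G, axes=1)               # contract adjacent ranks
print(np.allclose(A, B.squeeze()))               # True up to the tolerance
```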


Cited by 108 publications (115 citation statements).
References 60 publications.
“…The second method for obtaining a tensor is to represent a function in a tensor product basis, i.e., a basis produced by taking a full tensor product of univariate functions [13,31,6]. In particular, let $\{\phi^{k}_{i_k} : \mathcal{X}_k \to \mathbb{R} : k = 1, \ldots$…”
Section: Tensor-Train Representation (mentioning; confidence: 99%)
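The construction quoted above — a coefficient tensor obtained by expanding a function in a full tensor product of univariate bases — can be made concrete with a short sketch. The choice of Legendre polynomials on [-1, 1], the Gauss–Legendre tensor grid, and the basis size n are all illustrative assumptions; the cited works allow general univariate bases.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical target function on [-1, 1]^3.
f = lambda x, y, z: np.exp(-x**2) * np.cos(y) * (1.0 + z)

n = 8                               # univariate basis size (assumption)
pts, _ = legendre.leggauss(n)       # Gauss-Legendre nodes per variable
V = legendre.legvander(pts, n - 1)  # univariate Vandermonde, shape (n, n)

# Evaluate f on the full tensor grid, then apply V^{-1} along each mode
# to get the coefficient tensor F with
#   f(x, y, z) ~= sum_{ijk} F[i, j, k] * phi_i(x) * phi_j(y) * phi_k(z).
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
F = f(X, Y, Z)
Vinv = np.linalg.inv(V)
for mode in range(3):
    F = np.moveaxis(np.tensordot(Vinv, F, axes=(1, mode)), 0, mode)
```

The resulting tensor F has n^d entries, which is exactly the exponential growth that applying the TT decomposition to such coefficient tensors is meant to tame.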
“…Parameters are represented by red squares and functions are represented in blue. In contrast to previous approaches [31,6], the parameters and the parameterized form that represent each univariate function are stored separately, i.e., no tensor-product structure is imposed upon the parameters of the constituent univariate functions. Instead, tensor-product structure is imposed upon the univariate functions themselves.…”
Section: Continuous Analogue of the Tensor-Train (mentioning; confidence: 99%)
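To illustrate the distinction the quote draws, here is a hypothetical sketch of a "continuous" TT core: an r_{k-1} × r_k grid of univariate functions, each carrying its own private parameters, with no shared tensor-product parameter grid. The class names and the choice of Legendre expansions are illustrative, not the cited paper's API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class UnivariateFn:
    """One scalar function of one variable with its own coefficients."""
    coef: np.ndarray                     # private parameters of this entry

    def __call__(self, x):
        return np.polynomial.legendre.legval(x, self.coef)

def eval_core(core, x):
    """Evaluate the matrix-valued core C_k(x): tensor-product structure is
    imposed on the univariate functions, not on their parameters."""
    return np.array([[fn(x) for fn in row] for row in core])

rng = np.random.default_rng(0)
core = [[UnivariateFn(rng.normal(size=5)) for _ in range(3)]  # r_k = 3
        for _ in range(2)]                                    # r_{k-1} = 2
print(eval_core(core, 0.3).shape)                             # (2, 3)
```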
“…In ongoing work, we will furthermore explore different NN architectures, such as residual networks, and reinforcement learning, which has been shown to be well suited for time-dependent data. Alternatively, instead of using NNs, other new approaches can be utilized to address the curse of dimensionality, such as spectral tensor-train decompositions [25].…”
Section: Future Prospects (mentioning; confidence: 99%)
“…However, when performing such operations with tensors in low-rank format, the representation rank of the tensors typically grows, which calls for their truncation, i.e. their approximation or reparametrisation with fewer parameters while keeping a reasonable accuracy [37,38,39]. This truncation of tensors may be viewed as a generalisation of the rounding of numbers, which occurs when working with floating-point formats.…”
Section: Introduction (mentioning; confidence: 99%)
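The rank growth and re-truncation the quote describes is already visible in the matrix case, the simplest low-rank format. Below is a minimal SVD-based sketch of that matrix analogue; TT rounding proper applies the same truncation core by core after an orthogonalization sweep, which is omitted here.

```python
import numpy as np

def truncate(A, tol):
    """Re-compress a matrix to the smallest rank meeting a relative
    tolerance: the low-rank analogue of rounding a number."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return (U[:, :r] * s[:r]) @ Vt[:r]

# A matrix with geometrically decaying singular values: nominally full
# rank, but truncation keeps ~20 of 60 directions at tol = 1e-6.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(60, 60)))
V, _ = np.linalg.qr(rng.normal(size=(60, 60)))
A = (U * 2.0 ** -np.arange(60)) @ V.T
A2 = truncate(A, tol=1e-6)
print(np.linalg.matrix_rank(A2))                   # 20
print(np.linalg.norm(A - A2) / np.linalg.norm(A))  # ~1e-6
```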