2021
DOI: 10.48550/arxiv.2112.01474
Preprint

Approximation by tree tensor networks in high dimensions: Sobolev and compositional functions

Abstract: This paper is concerned with convergence estimates for fully discrete tree tensor network approximations of high-dimensional functions from several model classes. For functions having standard or mixed Sobolev regularity, new estimates generalizing and refining known results are obtained, based on notions of linear widths of multivariate functions. In the main results of this paper, such techniques are applied to classes of functions with compositional structure, which are known to be particularly suitable for…

Cited by 5 publications (9 citation statements)
References 18 publications
“…We basically proceed as in [14] and successively apply the singular value decomposition as studied in the previous section. Related results can be found in [1,3,28].…”
Section: Tensor Train Format
confidence: 92%
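The successive application of the singular value decomposition mentioned in this statement is the standard TT-SVD construction for the tensor train format. A minimal NumPy sketch of that procedure follows; the function names and the `max_rank` truncation parameter are illustrative choices, not taken from the cited works:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into tensor-train (TT) cores by
    successively applying truncated SVDs to unfoldings of the tensor."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    # First unfolding: separate the first mode from the rest.
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))          # truncate to the target rank
        u, s, vt = u[:, :r_new], s[:r_new], vt[:r_new, :]
        cores.append(u.reshape(rank, dims[k], r_new))
        rank = r_new
        # Fold the remainder and expose the next mode for the next SVD.
        mat = (np.diag(s) @ vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[d - 1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor (for checking)."""
    res = cores[0]                              # shape (1, n_0, r_1)
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=(res.ndim - 1, 0))
    return res.reshape([c.shape[1] for c in cores])
```

With `max_rank` at least as large as the exact TT ranks, the decomposition is exact and `tt_reconstruct(tt_svd(T, max_rank))` recovers `T`; smaller values give the quasi-optimal low-rank truncation that the cited error analyses build on.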
“…Such tensor representations are known in computational quantum physics and quantum information theory as tensor networks or more precisely, as tree-based tensor networks [1]. All these names are used in the literature [3]. The present investigations can be extended to the hierarchical Tucker format by using the same or similar ideas, cf.…”
Section: Introduction
confidence: 99%
“…Opposite to the GAN formulation, we do not use the dual characterization and also do not require a discriminator approximation or Lipschitz constraints. The used explicit polynomial chaos model class corresponds to the generator only within the GAN min-max formulation (5). This leads to a single model that needs to be trained.…”
Section: Relation to Generative Adversarial Neural Networks (GAN)
confidence: 99%
“…To get a bigger picture, it should be noted that the graph structure of tensor formats in some way resembles the topology (and expressiveness) of neural networks (NN) [18,19,1,2,5]. In fact, tensor networks can be seen as a subset of NNs with somewhat lesser expressivity 2 on the one hand, but much richer mathematical structure on the other.…”
confidence: 99%