2018
DOI: 10.1515/cmam-2018-0028

Non-intrusive Tensor Reconstruction for High-Dimensional Random PDEs

Abstract: This paper examines a completely non-intrusive, sample-based method for the computation of functional low-rank solutions of high-dimensional parametric random PDEs, which have become an area of intensive research in Uncertainty Quantification (UQ). In order to obtain a generalized polynomial chaos representation of the approximate stochastic solution, a novel black-box rank-adapted tensor reconstruction procedure is proposed. The performance of the described approach is illustrated with several numerical examp…

Cited by 18 publications (19 citation statements)
References 76 publications
“…Thus, the central notion of this scheme is to learn the solution operator y ↦ a(·, y) ↦ u(·, y) from generated data u(·, y_i). The computation of the data u(·, y_i) is completely non-intrusive, meaning that standard FE solvers can be employed to compute u(·, y_i) and perform the optimization of the mean squared errors in a standard tensor recovery algorithm as described in [50].…”
Section: Parametric PDEs as a Common Benchmark Example
confidence: 99%
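The non-intrusive workflow described in this excerpt can be sketched in a few lines: sample parameters, call a black-box solver, and fit generalized polynomial chaos coefficients by minimizing the mean squared error. In the sketch below, `solve_pde` is a hypothetical closed-form stand-in for an expensive FE solve, the basis is a tensorized Legendre basis for d = 2, and the dense least-squares step stands in for the low-rank tensor recovery of [50]; all names and tolerances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_pde(y):
    # Hypothetical stand-in for a black-box FE solver evaluated at parameter y.
    return 1.0 / (4.0 + y[0] + 0.5 * y[1])

d, N, deg = 2, 200, 3
Y = rng.uniform(-1.0, 1.0, size=(N, d))        # parameter samples y_i
u = np.array([solve_pde(y) for y in Y])        # non-intrusive solves u(., y_i)

def legendre_basis(Y, deg):
    # Tensorized Legendre (polynomial chaos) basis up to degree `deg` per
    # variable, assuming d = 2: all products P0[:, i] * P1[:, j].
    P0 = np.polynomial.legendre.legvander(Y[:, 0], deg)
    P1 = np.polynomial.legendre.legvander(Y[:, 1], deg)
    return np.einsum('ni,nj->nij', P0, P1).reshape(Y.shape[0], -1)

A = legendre_basis(Y, deg)
coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)  # mean-squared-error fit

# Validate the surrogate on held-out samples.
Y_test = rng.uniform(-1.0, 1.0, size=(50, d))
u_test = np.array([solve_pde(y) for y in Y_test])
err = (np.linalg.norm(legendre_basis(Y_test, deg) @ coeffs - u_test)
       / np.linalg.norm(u_test))
print(err < 0.05)   # smooth target, so the gPC fit is accurate
```

A rank-adapted tensor reconstruction would replace the `lstsq` call once d grows, since the full tensor basis has (deg+1)^d terms and becomes intractable in high dimensions.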
“…In fact, this allows for the computation of the H_0^1(D) norm of functions in the FE space more efficiently and provides numerical stability to the minimization problem (5). Note also that one can use a standard tensor reconstruction algorithm as in [50], since ‖u‖_{H_0^1(D)} = ‖u‖_2. However, the resulting tensor represents the solution w.r.t.…”
Section: Parametric PDEs as a Common Benchmark Example
confidence: 99%
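The change of basis this excerpt alludes to can be made concrete with a small sketch. Assuming a 1D Poisson problem with P1 elements on a uniform grid (a hypothetical stand-in for the actual FE space in the paper), the squared H_0^1 norm is the quadratic form uᵀSu with the stiffness matrix S, and a Cholesky factorization S = LLᵀ turns it into a plain Euclidean norm ‖Lᵀu‖_2, which is exactly what an ℓ2-based tensor recovery needs.

```python
import numpy as np

# Assemble the 1D P1 stiffness matrix on (0, 1) with n interior nodes.
n, h = 99, 1.0 / 100
main = 2.0 / h * np.ones(n)
off = -1.0 / h * np.ones(n - 1)
S = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

L = np.linalg.cholesky(S)                      # S = L @ L.T

# Coefficient vector of an FE function (nodal values of sin(pi x)).
u = np.sin(np.pi * np.linspace(h, 1 - h, n))

h1_norm = np.sqrt(u @ S @ u)                   # ||u||_{H_0^1}^2 = u^T S u
l2_equiv = np.linalg.norm(L.T @ u)             # Euclidean norm after transform

print(np.isclose(h1_norm, l2_equiv))           # the two norms agree
```

Working in the transformed coefficients Lᵀu is also what gives the numerical stability mentioned above, since S is well-conditioned only after the Cholesky split.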
“…In spite of this success, we have to point out that optimizing over S^d_{g,ρ} took about 10 times longer than optimizing over S^{d,aug}_{g,ρ}. Finally, we observe that the recovery in T_{14}(V_{10^6}) produces unexpectedly large relative errors when compared to previous results in [13]. This suggests that the rank-adaptive algorithm from [13] has a strong regularizing effect that improves the sample efficiency.…”
Section: Quantities of Interest
confidence: 54%
“…In the context of optimal control, tensor train networks have been utilized for solving the Hamilton–Jacobi–Bellman equation in [8,9], for solving backward stochastic differential equations in [10], and for the calculation of stock option prices in [11,12]. In the context of uncertainty quantification they are used in [13][14][15], and in the context of image classification they are used in [16,17].…”
Section: Introduction
confidence: 99%