A multilevel stochastic collocation method for SPDEs
2015 | DOI: 10.1063/1.4912309
Abstract: We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demon…
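
The multilevel idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration and not the authors' implementation: it assumes a user-supplied solver `solve_pde(y, h)` returning a quantity of interest from a spatial solve with mesh size `h` at parameter value `y`, and an operator `build_interpolant(f, level)` returning a collocation interpolant of `f` on the level-`level` parameter grid. The estimator pairs fine spatial corrections with coarse interpolation levels, in the spirit of multilevel Monte Carlo.

```python
# Hypothetical building blocks (assumptions, not from the paper):
#   solve_pde(y, h)          -- quantity of interest from a spatial solve with mesh size h
#   build_interpolant(f, l)  -- interpolant of f over the parameter domain using the
#                               level-l collocation grid (e.g. a sparse grid)

def multilevel_collocation(solve_pde, build_interpolant, L, h0):
    """Telescoping multilevel stochastic collocation estimator (sketch).

    Spatial levels k = 0..L use mesh sizes h_k = h0 / 2**k; the correction
    u_k - u_{k-1} on fine spatial levels is interpolated on coarse parameter
    grids, so most collocation points require only cheap spatial solves.
    """
    terms = []
    for k in range(L + 1):
        hk = h0 / 2 ** k
        if k == 0:
            diff = lambda y, hk=hk: solve_pde(y, hk)
        else:
            hp = h0 / 2 ** (k - 1)
            diff = lambda y, hk=hk, hp=hp: solve_pde(y, hk) - solve_pde(y, hp)
        # pair the fine spatial level k with the coarse interpolation level L - k
        terms.append(build_interpolant(diff, L - k))

    # the multilevel approximation is the sum of the per-level interpolants
    return lambda y: sum(I(y) for I in terms)
```

How the spatial and interpolation levels are balanced is exactly what the paper's convergence and cost estimates address; the simple pairing `L - k` above is only a placeholder.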

Cited by 3 publications (2 citation statements) | References 11 publications
“…In particular, the third column stores the results of Gunzburger and Teckentrup [18], which uses an approach very similar to ours, with a different evaluation of the Lebesgue constant. The pointsets were available at [19] and were tested. Some points were outside the domain by a very small amount.…”
Section: The Lebesgue Function and Its Computation
confidence: 99%
“…In [18], a similar strategy has been suggested by means of the Matlab built-in routine fminimax, improving some results in [11] for d ≤ 10 and also performing experiments on the cube and on the 3D ball. These pointsets are available at [19]. In [27] a very different approach has been used via two greedy algorithms: the first determines a good initial pointset, and the second refines the results via a leave-one-point-out technique.…”
confidence: 99%
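
For context, the Lebesgue constant referred to in these excerpts can be estimated numerically by maximising the Lebesgue function over a dense evaluation set. The sketch below is a generic one-dimensional illustration, not the evaluation procedure of [18] or of the citing paper; the node set and the evaluation grid are placeholders chosen only for the example.

```python
import numpy as np

def lebesgue_function(nodes, x):
    """Sum of absolute values of the Lagrange basis polynomials at the points x."""
    nodes = np.asarray(nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    vals = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        # i-th Lagrange basis polynomial l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        li = np.prod((x[:, None] - others) / (xi - others), axis=1)
        vals += np.abs(li)
    return vals

# Example: Lebesgue constant of 11 Chebyshev points on [-1, 1],
# estimated as the maximum of the Lebesgue function on a dense grid.
nodes = np.cos((2 * np.arange(11) + 1) / (2 * 11) * np.pi)
grid = np.linspace(-1.0, 1.0, 20001)
print(lebesgue_function(nodes, grid).max())
```

In several variables, as in the excerpts above, the same maximisation becomes a genuine optimisation problem over the domain, which is why routines such as fminimax or greedy point-placement strategies enter the picture.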