2017
DOI: 10.1137/16m1075582
A Preconditioned Low-Rank Projection Method with a Rank-Reduction Scheme for Stochastic Partial Differential Equations

Abstract: In this study, we consider the numerical solution of large systems of linear equations obtained from the stochastic Galerkin formulation of stochastic partial differential equations. We propose an iterative algorithm that exploits the Kronecker product structure of the linear systems. The proposed algorithm efficiently approximates the solutions in low-rank tensor format. Using standard Krylov subspace methods for the data in tensor format is computationally prohibitive due to the rapid growth of tensor ranks …
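The Kronecker product structure mentioned in the abstract can be sketched concretely: the stochastic Galerkin operator has the form A = Σ_k G_k ⊗ K_k, and a low-rank iterate X = V Wᵀ can be multiplied by A without ever assembling A. The following Python/NumPy fragment is a minimal illustration under that assumption; all sizes and variable names are hypothetical and not taken from the paper.

```python
import numpy as np

# A stochastic Galerkin matrix has the Kronecker-sum form A = sum_k G_k (x) K_k,
# where the K_k are spatial stiffness-like matrices and the G_k are small
# stochastic Galerkin matrices. With the iterate kept factored as X = V @ W.T,
# A @ vec(X) = vec(sum_k K_k @ X @ G_k.T), so A never has to be assembled.

def kron_matvec_lowrank(Ks, Gs, V, W):
    """Apply A = sum_k kron(G_k, K_k) to vec(V @ W.T), returned in matrix form."""
    Y = np.zeros((Ks[0].shape[0], Gs[0].shape[0]))
    for K, G in zip(Ks, Gs):
        # Each term (K @ V) @ (G @ W).T keeps the tall-skinny factored structure;
        # here the terms are simply accumulated into a dense matrix for checking.
        Y += (K @ V) @ (G @ W).T
    return Y

# Toy sizes (hypothetical): n_x spatial DOFs, n_xi stochastic basis functions.
rng = np.random.default_rng(0)
n_x, n_xi, rank, terms = 50, 20, 4, 3
Ks = [rng.standard_normal((n_x, n_x)) for _ in range(terms)]
Gs = [rng.standard_normal((n_xi, n_xi)) for _ in range(terms)]
V = rng.standard_normal((n_x, rank))
W = rng.standard_normal((n_xi, rank))

Y = kron_matvec_lowrank(Ks, Gs, V, W)
# Verify against the explicitly assembled Kronecker operator (column-major vec).
A = sum(np.kron(G, K) for K, G in zip(Ks, Gs))
assert np.allclose(A @ (V @ W.T).flatten(order="F"), Y.flatten(order="F"))
```

Note that each such product raises the rank of the factored iterate from r to (number of terms)·r, which is why a rank-reduction (truncation) scheme such as the one in the paper's title is needed inside the Krylov iteration.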

Cited by 25 publications (33 citation statements)
References 26 publications (39 reference statements)
“…This leads to linear systems in Kronecker formulation [10,41,43]. We note that for stochastic Galerkin linear systems, various efficient iterative solvers such as mean-based preconditioning methods [41,43], hierarchical preconditioners [47,48], preconditioned low-rank projection methods [36] are vigorously studied. Nevertheless, to the best of our knowledge, in the case of stochastic Helmholtz problems these methods have not been analysed.…”
Section: Introduction
confidence: 99%
“…To the best of our knowledge, there is little work on merging multilevel and tensor approximation techniques in uncertainty quantification. Recently, Lee and Elman [35] proposed a two-level scheme in the context of the Galerkin method for PDEs with random data. This scheme uses the solution from the coarse level to identify a dominant subspace in the domain of the random parameter, which in turn is used to speed up the solution on the fine level by avoiding costly low-rank truncations.…”
Section: Introduction
confidence: 99%
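As a rough sketch of the two-level idea described in the statement above (not the authors' implementation; variable names, tolerances, and the dense reduced solve are assumptions made for illustration), the dominant right singular vectors of the coarse-grid solution can serve as a fixed stochastic basis for the fine-grid Galerkin system, so no rank truncation is needed on the fine level:

```python
import numpy as np

def dominant_stochastic_basis(X_coarse, tol=1e-4):
    """Right singular vectors of the coarse-grid solution matrix that capture
    most of its energy; these span the dominant stochastic subspace."""
    _, s, Zt = np.linalg.svd(X_coarse, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return Zt[:r, :].T                       # n_xi x r reduced basis Wr

def solve_fine_in_subspace(Ks, Gs, F_fine, Wr):
    """Galerkin-project the fine-grid Kronecker system sum_k (G_k (x) K_k) onto
    the fixed stochastic basis Wr and solve for the spatial factors
    (a dense solve here, purely for illustration)."""
    n_x, r = F_fine.shape[0], Wr.shape[1]
    A_red = sum(np.kron(Wr.T @ G @ Wr, K) for K, G in zip(Ks, Gs))
    b_red = (F_fine @ Wr).flatten(order="F")
    V = np.linalg.solve(A_red, b_red).reshape((n_x, r), order="F")
    return V @ Wr.T                          # fine solution constrained to span(Wr)
```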
“…Combined with rank compression techniques, the iterates are forced to stay in low-rank format. This idea has been used with Krylov subspace methods [2,5,20,23] and multigrid methods [9,15]. The low-rank solution methods solve the linear systems to a certain accuracy with much less computational effort and facilitate the treatment of larger problem scales.…”
confidence: 99%
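The rank-compression step referred to in this last statement is typically a recompression of the factored iterate X ≈ V Wᵀ by a truncated SVD. A minimal sketch under that assumption follows; the function name and tolerances are illustrative and not taken from any of the cited works.

```python
import numpy as np

def truncate_factors(V, W, tol=1e-8, max_rank=None):
    """Recompress X = V @ W.T to lower rank via QR of the factors followed by
    an SVD of the small core matrix."""
    Qv, Rv = np.linalg.qr(V)                 # V = Qv @ Rv
    Qw, Rw = np.linalg.qr(W)                 # W = Qw @ Rw
    U, s, Zt = np.linalg.svd(Rv @ Rw.T)      # SVD of a small (rank x rank) core
    r = int(np.sum(s > tol * s[0]))          # drop relatively negligible modes
    if max_rank is not None:
        r = min(r, max_rank)
    return (Qv @ U[:, :r]) * s[:r], Qw @ Zt[:r, :].T

# Example: a rank-6 matrix stored with redundant rank-12 factors.
rng = np.random.default_rng(1)
V = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 12))
W = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 12))
V2, W2 = truncate_factors(V, W)
print(V2.shape[1])                           # -> 6: redundant directions removed
assert np.allclose(V @ W.T, V2 @ W2.T)
```

Applying such a truncation after every matrix-vector product keeps the Krylov iterates at modest rank, which is the source of the computational savings the quoted statement describes.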