2020
DOI: 10.1002/nla.2325
Efficient Krylov subspace methods for uncertainty quantification in large Bayesian linear inverse problems

Abstract: Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems) and for problems where computation of the square root and inverse of the prior covariance matrix are not feasible. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. In this work, we assume that generalized Golub-Kahan-based m…


Cited by 14 publications (22 citation statements); references 37 publications.
“…Since the posterior density function is Gaussian, variance estimates for the solution can be obtained by computing the diagonal entries of Γ_post. Furthermore, samples from the posterior can be obtained using efficient Krylov subspace methods [13, 14].…”
Section: Methodsmentioning
confidence: 99%
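The quoted statement can be illustrated at small scale. The sketch below is a hypothetical toy example, not the Krylov machinery of the cited works: it forms the dense Gaussian posterior for d = Ax + noise, reads pointwise variances off the diagonal of Γ_post, and draws one sample via a Cholesky factor (all variable names are illustrative assumptions).

```python
# Toy Gaussian linear inverse problem (illustrative names, dense algebra;
# the cited Krylov methods exist precisely to avoid forming Gamma_post).
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 30
A = rng.standard_normal((m, n))               # forward operator (illustrative)
sigma_noise, sigma_prior = 0.1, 1.0
d = A @ rng.standard_normal(n) + sigma_noise * rng.standard_normal(m)

# Posterior precision and covariance for a zero-mean Gaussian prior.
H = A.T @ A / sigma_noise**2 + np.eye(n) / sigma_prior**2
Gamma_post = np.linalg.inv(H)
m_post = Gamma_post @ (A.T @ d) / sigma_noise**2

variance = np.diag(Gamma_post)                # pointwise uncertainty estimates
L = np.linalg.cholesky(Gamma_post)
sample = m_post + L @ rng.standard_normal(n)  # one posterior draw
```

At realistic problem sizes neither `Gamma_post` nor its Cholesky factor can be formed, which is the regime the quoted Krylov-subspace samplers target.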
“…However, neither computing nor storing the covariance matrix is feasible, making further uncertainty estimation challenging. Instead, we follow the approach described in (Chung et al., 2018; Saibaba et al., 2020) for the fixed mean case, where an approximation to the posterior covariance matrix is obtained using the computed vectors generated using the generalized Golub-Kahan bidiagonalization process. An advantage of this approach is that, by storing partial information while computing the MAP estimate, we can approximately compute the uncertainty associated with the MAP estimate (e.g., posterior variance) with minimal additional cost, and no further accesses to the forward and adjoint models.…”
Section: Approximation To the Posterior Covariance Matricesmentioning
confidence: 99%
“…More precisely, in addition to storing the information required for storing Q, we only need to store nk + k additional entries corresponding to the matrices Z_k and Δ_k. Furthermore, the error in the low-rank approximation can be analyzed using similar techniques as in (Saibaba et al., 2020). Similar to the approach described in Section 3.2, the posterior variance, which corresponds to the diagonal entries of Γ_post, can be approximated using the diagonal entries of (B6).…”
Section: B3 Approximation To the Posterior Variancementioning
confidence: 99%
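The low-rank idea behind these excerpts can be sketched with a standard optimal low-rank posterior update (a hedged illustration, not the generalized Golub-Kahan variant of the cited works; all names are assumptions). With a whitened prior Γ_prior = I, the k leading eigenpairs (λ_i, v_i) of the data-misfit Hessian give Γ_post ≈ I − V_k diag(λ_i/(1+λ_i)) V_k^T, so only k vectors need be stored and the approximate posterior variance is the diagonal of that rank-k update.

```python
# Rank-k approximation to the posterior covariance from the leading
# eigenpairs of the (prior-whitened) data-misfit Hessian. Illustrative
# dense computation; at scale the eigenpairs would come from an
# iterative (e.g., Krylov) method applied matrix-free.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 40, 25, 10
A = rng.standard_normal((m, n))
sigma = 0.1
H = A.T @ A / sigma**2                     # data-misfit Hessian, prior whitened

lam, V = np.linalg.eigh(H)                 # ascending eigenvalues
lam, V = lam[::-1][:k], V[:, ::-1][:, :k]  # keep the k leading eigenpairs
D = lam / (1.0 + lam)                      # optimal low-rank filter factors

Gamma_post_lr = np.eye(n) - (V * D) @ V.T  # rank-k posterior approximation
var_lr = np.diag(Gamma_post_lr)            # approximate posterior variance

# Exact posterior covariance, feasible only at this toy scale.
Gamma_post = np.linalg.inv(H + np.eye(n))
```

The spectral-norm error of the truncated update is λ_{k+1}/(1+λ_{k+1}) < 1, which is the kind of bound the quoted error analysis makes precise for the Golub-Kahan-generated subspaces.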