2014
DOI: 10.1088/0266-5611/30/11/114015

Likelihood-informed dimension reduction for nonlinear inverse problems

Abstract: The intrinsic dimensionality of an inverse problem is affected by prior information, the accuracy and number of observations, and the smoothing properties of the forward operator. From a Bayesian perspective, changes from the prior to the posterior may, in many problems, be confined to a relatively low-dimensional subspace of the parameter space. We present a dimension reduction approach that defines and identifies such a subspace, called the "likelihood-informed subspace" (LIS), by characterizing the relative i…
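
Concretely, for a linearized forward model with Gaussian noise and a Gaussian prior, likelihood-informed directions are eigenvectors of the prior-preconditioned Gauss-Newton Hessian, and eigenvalues above one indicate directions where the data constrain the parameters more strongly than the prior. The sketch below illustrates only that local, linearized construction (the paper builds a global LIS from such spectra over posterior samples); the function names and the threshold are illustrative, not the authors' code.

```python
import numpy as np

def likelihood_informed_subspace(J, Gamma_obs, L_prior, tau=1.0):
    """Sketch of an LIS construction for a linearized model.

    J         : forward-model Jacobian at a reference point, shape (m, n)
    Gamma_obs : observation-noise covariance, shape (m, m)
    L_prior   : factor of the prior covariance, Gamma_pr = L_prior @ L_prior.T
    tau       : eigenvalue threshold; lambda > tau marks a direction as
                likelihood-informed (data dominate the prior there).
    """
    # Prior-preconditioned Gauss-Newton Hessian: L^T J^T Gamma_obs^{-1} J L
    A = np.linalg.solve(Gamma_obs, J @ L_prior)   # Gamma_obs^{-1} J L
    H_pp = (J @ L_prior).T @ A
    lam, V = np.linalg.eigh(H_pp)
    order = np.argsort(lam)[::-1]                 # sort eigenpairs descending
    lam, V = lam[order], V[:, order]
    r = int(np.sum(lam > tau))                    # informed dimension
    # Map eigenvectors back to parameter space to span the (local) LIS
    basis = L_prior @ V[:, :r]
    return lam, basis
```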

Cited by 137 publications (228 citation statements). References 58 publications.
“…The eigenvectors v_i can be interpreted as the most constrained modes in flux space, i.e., flux patterns that are independently constrained by the observations (Cui et al., 2014; Bousserez and Henze, 2017). These eigenvectors of the prior-preconditioned Hessian are efficiently calculated using a fully parallelized randomized algorithm (Halko et al., 2011), as in Bui-Thanh et al. (2012) and Bousserez and Henze (2017).…”
Section: SVD-based Inversion (mentioning)
confidence: 99%
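
At this scale the Hessian is typically available only through matrix-vector products, which is what makes the randomized approach of Halko et al. (2011) attractive: each application to a block of random vectors can be parallelized across columns. A generic sketch of such a randomized eigensolver follows (not the cited implementations; `hess_matvec`, the oversampling, and the iteration count are illustrative choices):

```python
import numpy as np

def randomized_eigs(hess_matvec, n, k, p=10, n_iter=2, seed=0):
    """Randomized eigendecomposition of a symmetric PSD operator.

    hess_matvec : applies the (prior-preconditioned) Hessian to a block
                  of vectors, shape (n, b) -> (n, b)
    n, k        : parameter dimension; number of leading eigenpairs sought
    p, n_iter   : oversampling and subspace (power) iterations
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = hess_matvec(Omega)
    for _ in range(n_iter):                   # refine the range estimate
        Q, _ = np.linalg.qr(Y)
        Y = hess_matvec(Q)
    Q, _ = np.linalg.qr(Y)
    T = Q.T @ hess_matvec(Q)                  # small projected eigenproblem
    lam, S = np.linalg.eigh(T)
    order = np.argsort(lam)[::-1][:k]
    return lam[order], Q @ S[:, order]        # leading eigenpairs

# Example: H = A A^T with a rapidly decaying spectrum
A = np.random.default_rng(1).standard_normal((500, 40)) / np.arange(1, 41)
lam, V = randomized_eigs(lambda X: A @ (A.T @ X), n=500, k=10)
```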
“…In the light of (14), we see that in the partially-informed setup considered in this paper, the optimal reduction performance is lower bounded by κ_i(M) and upper bounded by κ_i(M_post). The gap between κ_i(M) and κ_i(M_post) "materializes" the loss of reducibility which can occur by working in a partially-informed setting rather than a perfectly-informed one.…”
Section: Worst-case Optimal Model Reduction (mentioning)
confidence: 81%
“…induced by the optimal approximation subspace S_i^post is bounded by the Kolmogorov i-width κ_i(M_post), see (14). The upper bound in Theorem 1 thus also defines an upper limit on the projection error made by reducing the true, unknown, manifold M in the worst-case optimal subspace S_i^post.…”
(mentioning)
confidence: 96%
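
These widths become concrete in a simplified setting not taken from the excerpt: if a manifold is modeled as the image of the Euclidean unit ball under a matrix, its Kolmogorov i-width equals the (i+1)-th singular value of that matrix, so the gap between κ_i(M) and κ_i(M_post) can be read off two SVDs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Simplified stand-in manifolds: images of the unit ball under linear maps.
# For M = A * (unit ball), the Kolmogorov i-width is sigma_{i+1}(A).
A_post = rng.standard_normal((n, n)) @ np.diag(0.5 ** np.arange(n))  # "true" manifold
A      = A_post + 0.05 * rng.standard_normal((n, n))                 # partially-informed

kappa      = np.linalg.svd(A,      compute_uv=False)[1:]   # kappa_i(M),      i = 1..n-1
kappa_post = np.linalg.svd(A_post, compute_uv=False)[1:]   # kappa_i(M_post)

# The gap between kappa_i(M) and kappa_i(M_post) quantifies the reducibility
# lost by working with the partially-informed manifold M instead of M_post.
for i in (0, 4, 9):
    print(f"i={i+1}: kappa_i(M)={kappa[i]:.3e}  kappa_i(M_post)={kappa_post[i]:.3e}")
```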
“…A further source of low dimensionality in transports is low-rank structure, i.e., situations where a map departs from the identity only on a low-dimensional subspace of the input space [78]. This situation is fairly common in large-scale Bayesian inverse problems where the data are informative, relative to the prior, only about a handful of directions in the parameter space [25,79].…”
Section: Discussion (mentioning)
confidence: 99%
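
A minimal sketch of that low-rank structure, under assumptions added here: the map equals the identity off an r-dimensional subspace span(U) and is nonlinear only in the r informed coordinates (the particular nonlinearity g is an arbitrary illustrative choice).

```python
import numpy as np

def low_rank_transport(U, g):
    """Return T(x) = x + U @ (g(U.T @ x) - U.T @ x): identity off span(U).

    U : (n, r) matrix with orthonormal columns spanning the informed subspace
    g : R^r -> R^r nonlinear map acting only on the low-dimensional coordinates
    """
    def T(x):
        z = U.T @ x                     # coordinates in the informed subspace
        return x + U @ (g(z) - z)       # the complement of span(U) is untouched
    return T

n, r = 100, 3
U, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, r)))
T = low_rank_transport(U, np.tanh)      # illustrative choice of g
x = np.random.default_rng(1).standard_normal(n)
# Directions orthogonal to span(U) pass through unchanged:
assert np.allclose((np.eye(n) - U @ U.T) @ (T(x) - x), 0.0)
```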
“…Thus, if we want to evaluate the direct transport at a particular x* ∈ R^n, i.e., z* := T(x*), then by (25) we can simply invert S at x* to obtain z*. In particular, if x* = (x*_1, …”
Section: Computing the Inverse Map (mentioning)
confidence: 99%
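
The inversion alluded to here exploits the triangular (Knothe-Rosenblatt) structure of S: each component is monotone increasing in its last argument, so z* can be recovered one coordinate at a time with a one-dimensional solver. A minimal sketch, with bisection standing in for whatever root-finder a real implementation would use, and with illustrative component functions:

```python
import numpy as np

def invert_triangular_map(S, x_star, lo=-10.0, hi=10.0, tol=1e-10):
    """Invert a lower-triangular map S at x_star, i.e. find z* with S(z*) = x*.

    S : list of component functions; S[k](z_1, ..., z_k, t) is monotone
        increasing in t (Knothe-Rosenblatt structure), and the solution is
        assumed to lie in the bracket [lo, hi].
    """
    z = np.empty(len(S))
    for k, Sk in enumerate(S):
        a, b = lo, hi
        while b - a > tol:              # bisection on the last argument
            m = 0.5 * (a + b)
            if Sk(*z[:k], m) < x_star[k]:
                a = m
            else:
                b = m
        z[k] = 0.5 * (a + b)
    return z

# Illustrative triangular map with components monotone in their last argument
S = [lambda z1: z1**3 + z1,
     lambda z1, z2: z1 + z2**3 + z2]
x_star = np.array([2.0, 1.0])
z_star = invert_triangular_map(S, x_star)   # approximately (1.0, 0.0)
```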