2019
DOI: 10.1137/18m1220625

Efficient Marginalization-Based MCMC Methods for Hierarchical Bayesian Inverse Problems

Abstract: Hierarchical models in Bayesian inverse problems are characterized by an assumed prior probability distribution for the unknown state and measurement error precision, and hyperpriors for the prior parameters. Combining these probability models using Bayes' law often yields a posterior distribution that cannot be sampled from directly, even for a linear model with Gaussian measurement error and Gaussian prior, both of which we assume in this paper. In such cases, Gibbs sampling can be used to sample from the posterior…
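
To make the setup concrete, here is a minimal sketch of the kind of Gibbs sampler the abstract alludes to, assuming the standard linear model y = Ax + e with noise precision lam, prior precision del*L, and Gamma hyperpriors on both precisions; the variable names and the particular shape/rate hyperprior parameterization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gibbs_hierarchical(A, y, L, n_iter=5000, a_lam=1.0, b_lam=1e-4,
                       a_del=1.0, b_del=1e-4, seed=0):
    """Gibbs sampler for y = A x + e, e ~ N(0, lam^{-1} I),
    x | del ~ N(0, (del L)^{-1}), with Gamma(shape, rate)
    hyperpriors on the precisions lam and del (all assumed here)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    AtA, Aty = A.T @ A, A.T @ y
    lam, dlt = 1.0, 1.0
    out = []
    for _ in range(n_iter):
        # x | lam, del, y ~ N(mu, Q^{-1}) with precision Q = lam A^T A + del L
        Q = lam * AtA + dlt * L
        C = np.linalg.cholesky(Q)                      # Q = C C^T
        mu = np.linalg.solve(Q, lam * Aty)
        x = mu + np.linalg.solve(C.T, rng.standard_normal(n))
        # conjugate Gamma updates for the two precisions
        r = A @ x - y
        lam = rng.gamma(a_lam + 0.5 * m, 1.0 / (b_lam + 0.5 * (r @ r)))
        dlt = rng.gamma(a_del + 0.5 * n, 1.0 / (b_del + 0.5 * (x @ L @ x)))
        out.append((x, lam, dlt))
    return out
```

This componentwise scheme is exactly what marginalization-based methods aim to improve on: x and the hyperparameters are strongly coupled, so updating them in separate blocks can mix slowly.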

Cited by 10 publications (5 citation statements); References 48 publications.
“…In [23], the authors consider a fully Bayesian approach with assigned hyperpriors and use MCMC methods with Metropolis-Hastings independence sampling with a proposal distribution based on a low-rank approximation of the prior-preconditioned Hessian. In [188], these ideas are combined with marginalization. Theoretical results on approximating the posterior distribution and posterior covariance matrix can be found in [189,194].…”
Section: Connection Between Regularization and Bayesian Inversion
confidence: 99%
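
As a rough illustration of the proposal construction this statement describes, the sketch below builds a Gaussian independence proposal from a rank-r eigendecomposition of the prior-preconditioned Hessian, assuming a linear forward map A, i.i.d. noise of variance sigma2, a symmetric square root G_pr_sqrt of the prior covariance, and a zero prior mean; all names are hypothetical and the construction is only one standard way to realize the idea.

```python
import numpy as np

def lowrank_independence_proposal(A, y, G_pr_sqrt, sigma2, rank):
    """Gaussian independence proposal N(mu, S S^T) built from a rank-r
    eigendecomposition of the prior-preconditioned Hessian
    H = G^{1/2} A^T A G^{1/2} / sigma2 (zero prior mean assumed)."""
    n = A.shape[1]
    Ahat = (A @ G_pr_sqrt) / np.sqrt(sigma2)       # whitened forward map
    # leading eigenpairs of H = Ahat^T Ahat, via the SVD of Ahat
    _, s, Vt = np.linalg.svd(Ahat, full_matrices=False)
    lam, V = (s ** 2)[:rank], Vt[:rank].T
    # approximate posterior covariance:
    # G^{1/2} (I - V diag(lam/(1+lam)) V^T) G^{1/2}
    D = lam / (1.0 + lam)
    G_post = G_pr_sqrt @ (np.eye(n) - (V * D) @ V.T) @ G_pr_sqrt.T
    mu = G_post @ (A.T @ y) / sigma2               # approximate posterior mean
    # square-root factor S with S S^T = G_post, for drawing proposals
    a = 1.0 / np.sqrt(1.0 + lam) - 1.0
    S = G_pr_sqrt @ (np.eye(n) + (V * a) @ V.T)
    return mu, S

# a proposal draw is then: mu + S @ rng.standard_normal(n)
```

The low-rank update captures the data-informed directions (large eigenvalues of H) while leaving the prior untouched elsewhere, which is what makes the proposal cheap yet well matched to the posterior.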
“…Significant progress has been made in this area in recent years; the references [1,38,52,71,72,86,100,101] provide a small sample of literature from the last couple of decades. There have also been notable strides in computational methods for infinite-dimensional Bayesian inverse problems; see, e.g., [12,15,31,32,50,62,85,93,98].…”
Section: Introduction
confidence: 99%
“…For linear inverse problems, [5] investigated the use of Gibbs sampling schemes that alternately update the model parameters and hyperparameters, [1] analyzed the dimension scalability (w.r.t. the model parameters) of several Gibbs sampling schemes, [22] analyzed the consistency of the hyperparameter estimation, and [25,49] investigated the use of the one-block update of [48] and marginalization over the model parameters to accelerate the sampling. The success of these developments commonly relies on two facts: there exists an analytic expression for the marginal posterior over the hyperparameters, and one can directly sample the conditional posterior over the model parameters for given hyperparameters.…”
confidence: 99%
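
A schematic of the marginal-then-conditional structure this statement describes, with the marginal posterior density, conditional sampler, and proposal supplied as hypothetical callables (a sketch, not the cited papers' implementation):

```python
import numpy as np

def one_block_update(theta, log_marg_post, sample_conditional,
                     propose, log_q, rng):
    """One block update: MH on the analytic marginal posterior
    p(theta | y), then an exact draw x ~ p(x | y, theta). Because x
    is drawn from its exact conditional, the acceptance ratio only
    involves the marginal posterior and the proposal density q."""
    theta_new = propose(theta, rng)
    log_alpha = (log_marg_post(theta_new) - log_marg_post(theta)
                 + log_q(theta, theta_new) - log_q(theta_new, theta))
    if np.log(rng.uniform()) < log_alpha:
        theta = theta_new
    x = sample_conditional(theta, rng)  # exact Gaussian draw in the linear case
    return x, theta
```

Both facts named in the quote enter here: log_marg_post requires the analytic marginal over the hyperparameters, and sample_conditional requires direct sampling of the Gaussian conditional.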
“…This way, the RTO-PM is equivalent to an MH method that uses a proposal $p_{\mathrm{RTO}}(u^\sharp \mid \theta^\sharp)\, q(\theta^\sharp, \theta)$ to sample the joint posterior $p(u, \theta \mid y)$. Thus, for K = 1, the RTO-PM can also be viewed as the nonlinear extension of the one-block update (see, e.g., [25,48,49]) used in linear inverse problems. In the linear case, the conditional posterior $p(u \mid y, \theta)$ is Gaussian and can be sampled directly, whereas here we sample from $p(u \mid y, \theta)$ using an MH step with the RTO proposal.…”
confidence: 99%
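
The proposal structure named in this statement can be sketched as a single MH step on the joint posterior; rto_sample, log_rto_density, and the other callables below are hypothetical placeholders for the components the quote refers to:

```python
import numpy as np

def rto_pm_step(u, theta, log_joint_post, rto_sample, log_rto_density,
                propose_theta, log_q, rng):
    """One MH step targeting p(u, theta | y) with the joint proposal
    p_RTO(u' | theta') q(theta', theta): new hyperparameters from q,
    then a new state from the RTO proposal at theta'."""
    theta_p = propose_theta(theta, rng)
    u_p = rto_sample(theta_p, rng)
    # standard MH ratio for the joint proposal density
    log_alpha = (log_joint_post(u_p, theta_p) - log_joint_post(u, theta)
                 + log_rto_density(u, theta) - log_rto_density(u_p, theta_p)
                 + log_q(theta, theta_p) - log_q(theta_p, theta))
    if np.log(rng.uniform()) < log_alpha:
        return u_p, theta_p
    return u, theta
```

In the linear-Gaussian case the RTO proposal coincides with the exact conditional, its density cancels against the conditional factor of the joint posterior, and the step reduces to the one-block update sketched above.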