2016
DOI: 10.1016/j.jcp.2015.10.008

Dimension-independent likelihood-informed MCMC

Abstract: Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function …
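For readers unfamiliar with the function-space samplers being generalized here, the following is a minimal sketch (our own illustration, not code from the paper) of the preconditioned Crank-Nicolson (pCN) proposal, the dimension-robust baseline that the operator-weighted, likelihood-informed proposals of this work build on. The prior square root `C_sqrt`, step size `beta`, and negative log-likelihood `neg_log_like` are assumed placeholders.

```python
import numpy as np

def pcn_step(u, neg_log_like, C_sqrt, beta, rng):
    """One preconditioned Crank-Nicolson (pCN) Metropolis step.

    u             -- current state (discretized function), shape (n,)
    neg_log_like  -- callable returning the negative log-likelihood Phi(u)
    C_sqrt        -- square root of the Gaussian prior covariance, shape (n, n)
    beta          -- step size in (0, 1]
    rng           -- numpy Generator
    """
    xi = C_sqrt @ rng.standard_normal(u.shape[0])   # draw from the Gaussian prior
    v = np.sqrt(1.0 - beta**2) * u + beta * xi      # pCN proposal (prior-reversible)
    # Prior terms cancel: the acceptance ratio involves only the likelihood.
    log_alpha = neg_log_like(u) - neg_log_like(v)
    if np.log(rng.uniform()) < log_alpha:
        return v, True
    return u, False
```

Because this proposal leaves the Gaussian prior invariant, the acceptance rate does not collapse as the discretization is refined; the likelihood-informed proposals discussed in the abstract and in the citing works adapt the step within a low-dimensional data-informed subspace while aiming to retain that dimension-independence.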

Cited by 185 publications (259 citation statements)
References 44 publications (152 reference statements)
“…This perspective links the marginal sampler to some recent work in the inverse problem literature that improves on pCN, such as Law () and Cui et al. (), which we review below.…”
Section: Gradient-based Samplers for Latent Gaussian Models (mentioning)
confidence: 99%
“…An important task in practical MCMC is identifying good proposal distributions that can minimize autocorrelation. As the forward model used in our case studies does not have adjoint capabilities, advanced MCMC proposals, such as the stochastic Newton and the likelihood-informed dimension-independent proposals that rely on the derivatives of the posterior, cannot be used here. We consider the Gaussian random-walk proposal q(x, ·) = N(x, σ²Σ), where Σ is some covariance matrix and σ is a scalar that dictates the jump size.…”
Section: Bayesian Formulation and Posterior Exploration (mentioning)
confidence: 99%
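As a concrete illustration of the quoted proposal, here is a sketch of one Gaussian random-walk Metropolis step, under the assumption of a symmetric proposal q(x, ·) = N(x, σ²Σ); the `log_posterior` callable and the factor `L_sigma` are hypothetical placeholders rather than objects from the cited study.

```python
import numpy as np

def rw_metropolis_step(x, log_posterior, L_sigma, rng):
    """One Gaussian random-walk Metropolis step with proposal
    q(x, .) = N(x, sigma^2 * Sigma), where L_sigma is any factor with
    L_sigma @ L_sigma.T = sigma^2 * Sigma (e.g. sigma * cholesky(Sigma)).
    """
    y = x + L_sigma @ rng.standard_normal(x.shape[0])   # symmetric Gaussian proposal
    log_alpha = log_posterior(y) - log_posterior(x)      # q cancels by symmetry
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

Unlike the derivative-based proposals mentioned in the statement, this step needs only posterior density evaluations, which is why it remains usable when the forward model has no adjoint.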
“…Put another way, the conditioning lemma is exact for the pushforward of the approximate map, π = T♯η. Nevertheless, if the conditional density π_{Θ|D=d} can be evaluated up to a normalizing constant, then one can quantify the error in the approximation of the conditional using (24). Under these conditions, if one is not satisfied with this error, then any of the approximate maps T_d constructed here could be useful as a proposal mechanism for importance sampling or MCMC, to generate (asymptotically) exact samples from the conditional of interest.…”
Section: Inverse Transport: Map from Samples (mentioning)
confidence: 99%
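A minimal sketch of how the suggestion in the last sentence could be realized: self-normalized importance sampling that uses the pushforward of a reference density under an approximate map as the proposal. All names (`T_d`, `log_det_jac`, `log_reference`, `log_unnorm_conditional`, `sample_reference`) are hypothetical placeholders, not functions from the cited work.

```python
import numpy as np

def snis_with_map(T_d, log_det_jac, log_reference, log_unnorm_conditional,
                  sample_reference, n_samples, rng):
    """Self-normalized importance sampling with proposal T_d # eta.

    T_d(z)                    -- approximate transport map applied to a reference sample z
    log_det_jac(z)            -- log |det dT_d/dz| at z
    log_reference(z)          -- log density of the reference eta at z
    log_unnorm_conditional(x) -- unnormalized log conditional density pi_{Theta|D=d}(x)
    sample_reference(rng)     -- draws one reference sample z ~ eta
    """
    samples, log_w = [], []
    for _ in range(n_samples):
        z = sample_reference(rng)
        x = T_d(z)
        # Density of x under the pushforward T_d # eta, via the change of variables.
        log_q = log_reference(z) - log_det_jac(z)
        log_w.append(log_unnorm_conditional(x) - log_q)
        samples.append(x)
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())              # stabilize before normalizing
    return np.array(samples), w / w.sum()        # samples and self-normalized weights
```

The weighted samples target the conditional of interest exactly (asymptotically), with the quality of the approximate map T_d showing up only through the variance of the weights.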
“…Accordingly, enormous efforts have been devoted to the design of improved MCMC and SMC samplers: schemes that generate more nearly independent or unweighted samples. While these efforts are too diverse to summarize easily, they often rest on the design of improved (and structure-exploiting) proposal mechanisms within the algorithms [37,3,26,22,62,53,34,24].…”
Section: Introduction (mentioning)
confidence: 99%