2013
DOI: 10.1016/j.cma.2013.02.017
Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

Cited by 124 publications (182 citation statements) · References 23 publications
“…Many algorithms have been proposed to perform this optimization in general non-linear cases (see e.g. [20][21][22]). Huan and Marzouk [22], for example, rewrite the expected relative entropy for a joint analysis of D₁ and D₂ as:…”
Section: B. Surprise (mentioning)
confidence: 99%
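For context, the expected information gain that these excerpts optimize is typically baselined by a nested (double-loop) Monte Carlo estimator, the kind of estimator that the Laplace-based approach of the cited paper is meant to accelerate. The following is a minimal sketch on a toy linear-Gaussian model; the model, variable names, and sample sizes are illustrative assumptions, not code from the cited or citing papers.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
d, m = 2, 3                      # parameter and data dimensions (assumed)
G = rng.normal(size=(m, d))      # stand-in for a design-dependent forward map
sigma = 0.1                      # observational noise standard deviation (assumed)

def sample_prior(n):
    return rng.normal(size=(n, d))               # standard normal prior

def log_likelihood(y, thetas):
    resid = y[None, :] - thetas @ G.T            # (n, m) residuals
    return (-0.5 * np.sum(resid**2, axis=1) / sigma**2
            - 0.5 * m * np.log(2 * np.pi * sigma**2))

def nested_mc_eig(n_outer=2000, n_inner=2000):
    """Double-loop Monte Carlo estimate of the expected information gain."""
    outer = sample_prior(n_outer)
    inner = sample_prior(n_inner)
    vals = np.empty(n_outer)
    for i, theta in enumerate(outer):
        y = G @ theta + sigma * rng.normal(size=m)          # synthetic data draw
        log_like = log_likelihood(y, theta[None, :])[0]
        # inner Monte Carlo estimate of the evidence log p(y)
        log_evid = logsumexp(log_likelihood(y, inner)) - np.log(n_inner)
        vals[i] = log_like - log_evid                       # log p(y|θ) − log p(y)
    return vals.mean()

print(nested_mc_eig())
```

The inner average over fresh prior draws estimates the evidence p(y); the resulting outer-times-inner sampling cost is what Laplace-type approximations of the expected information gain aim to avoid.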
“…It is likely that the issues with importance sampling will be exacerbated in higher-dimensional problems, and so importance sampling from the Laplace approximation may be more useful in this setting. Polynomial-based sparse quadrature methods [36] may also be useful in higher-dimensional model settings to perform the integration over θ in Equation (1).…”
Section: Discussion (mentioning)
confidence: 99%
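The suggestion in the excerpt above, importance sampling with the Laplace approximation as the proposal, can be outlined as follows. This is an illustrative sketch only: the objective, the finite-difference Hessian, and all function names are assumptions rather than the cited implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def laplace_is_log_evidence(y, log_like, log_prior, theta0, n=5000, eps=1e-4, seed=0):
    """Estimate log p(y) = log ∫ p(y|θ) p(θ) dθ by importance sampling
    from a Gaussian (Laplace) proposal centred at the MAP point."""
    neg_log_post = lambda t: -(log_like(y, t) + log_prior(t))
    theta_map = minimize(neg_log_post, theta0, method="BFGS").x
    d = theta_map.size
    # Finite-difference Hessian of the negative log posterior at the MAP point
    H = np.empty((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(d):
            hi, hj = eps * I[i], eps * I[j]
            H[i, j] = (neg_log_post(theta_map + hi + hj)
                       - neg_log_post(theta_map + hi - hj)
                       - neg_log_post(theta_map - hi + hj)
                       + neg_log_post(theta_map - hi - hj)) / (4 * eps**2)
    proposal = multivariate_normal(theta_map, np.linalg.inv(H))
    draws = np.atleast_2d(proposal.rvs(size=n, random_state=seed))
    # Importance weights: target (unnormalised posterior) over Gaussian proposal
    log_w = np.array([log_like(y, t) + log_prior(t) for t in draws]) - proposal.logpdf(draws)
    return logsumexp(log_w) - np.log(n)          # log of the importance-sampling average
```

As the excerpt notes, if the Laplace proposal under-covers the posterior tails the weights can degenerate; a heavier-tailed proposal (for example a multivariate t with the same location and scale) is a common remedy.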
“…Laplace approximations also suffer from the curse of dimensionality. To overcome this issue, Long et al. [36] used polynomial-based sparse quadrature to integrate over the prior distribution. Furthermore, the estimated posterior distribution obtained from the Laplace approximation may not accommodate the tails of the posterior distribution, and so we investigate alternative methods to obtain better coverage of the tails.…”
Section: Laplace Approximation (mentioning)
confidence: 99%
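A minimal illustration of quadrature over the prior, under the assumption of a standard Gaussian prior, is given below using a tensorised Gauss-Hermite rule. The cited approach uses a sparse (Smolyak-type) construction to mitigate the growth of the tensor grid with dimension; this sketch conveys only the quadrature idea, and the function name is hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite_prior_expectation(f, d, level=5):
    """Approximate E[f(θ)] for θ ~ N(0, I_d) with a tensor Gauss-Hermite rule.

    The node count grows as level**d, which is exactly the growth that sparse
    quadrature constructions are designed to avoid in higher dimensions.
    """
    nodes_1d, weights_1d = hermegauss(level)          # probabilists' Hermite rule
    weights_1d = weights_1d / np.sqrt(2.0 * np.pi)    # normalise to the N(0,1) density
    node_grids = np.meshgrid(*([nodes_1d] * d), indexing="ij")
    thetas = np.stack([g.ravel() for g in node_grids], axis=1)        # (level**d, d)
    weight_grids = np.meshgrid(*([weights_1d] * d), indexing="ij")
    weights = np.prod(np.stack([w.ravel() for w in weight_grids], axis=1), axis=1)
    return float(np.sum(weights * np.array([f(t) for t in thetas])))

# Sanity check: E[||θ||²] = d for a standard normal prior
print(gauss_hermite_prior_expectation(lambda t: np.sum(t**2), d=2))   # ≈ 2.0
```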
“…The main results obtained in [the cited work] are as follows.

Theorem. Under the assumption that the smallest singular value of the Jacobian matrix is bounded away from zero by a small constant and the model output $g(\boldsymbol{\theta}, \boldsymbol{\xi})$ has continuous second derivatives, the expected information gain can be approximated by
$$I(\boldsymbol{\xi}) = \int_{\boldsymbol{\Theta}} \left[ -\tfrac{1}{2}\log\bigl(\lvert\boldsymbol{\Sigma}(\boldsymbol{\theta}_0,\boldsymbol{\xi})\rvert\bigr) - \frac{\operatorname{tr}\bigl(\boldsymbol{\Sigma}(\boldsymbol{\theta}_0,\boldsymbol{\xi})\,\boldsymbol{H}_h(\boldsymbol{\theta}_0)\bigr)}{2} - \frac{d}{2} - \frac{d}{2}\log(2\pi) \right] p(\boldsymbol{\theta}_0)\,\mathrm{d}\boldsymbol{\theta}_0 + \mathcal{O}\!\left(\frac{1}{s}\right),$$
where $\boldsymbol{H}_h$ is the Hessian of $h(\boldsymbol{\theta}) = \log\bigl[p(\boldsymbol{\theta})\bigr]$.

Theorem. Under the assumption that $r < d$ and that the model output $g(\boldsymbol{\theta}, \boldsymbol{\xi})$ has continuous second derivatives, the expected information gain can be approximated by
$$I(\boldsymbol{\xi}) = \int_{\boldsymbol{\Theta}} \left[ -\tfrac{1}{2}\log\lvert\boldsymbol{\Sigma}_{\mathrm{proj}}(\boldsymbol{\theta}_0,\boldsymbol{\xi})\rvert - \log\!\left[\int_{T} p_{\boldsymbol{s},\boldsymbol{t}}(\boldsymbol{0},\boldsymbol{t})\,\mathrm{d}\boldsymbol{t}\right] - \frac{r}{2} - \frac{r}{2}\log(2\pi) \right] p(\boldsymbol{\theta}_0)\,\mathrm{d}\boldsymbol{\theta}_0\,\ldots$$…”
Section: Expected Information Gain in a Proposed Experiment (mentioning)
confidence: 99%
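To make the first theorem above concrete, its integrand can be evaluated directly once the Laplace posterior covariance Σ(θ₀, ξ) and the Hessian H_h of the log prior density are available; for a Gaussian prior N(0, Γ), H_h = −Γ⁻¹. The snippet below transcribes the integrand exactly as quoted, with assumed, illustrative inputs.

```python
import numpy as np

def laplace_eig_integrand(Sigma, H_h):
    """Integrand of the Laplace-approximated expected information gain,
    as quoted above: -0.5*log|Σ| - tr(Σ H_h)/2 - d/2 - (d/2) log(2π)."""
    d = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return (-0.5 * logdet
            - 0.5 * np.trace(Sigma @ H_h)
            - 0.5 * d
            - 0.5 * d * np.log(2.0 * np.pi))

Gamma = np.eye(2)            # prior covariance (assumed)
Sigma = 0.05 * np.eye(2)     # Laplace posterior covariance at θ0 (assumed)
print(laplace_eig_integrand(Sigma, -np.linalg.inv(Gamma)))
```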
“…The following theorem holds.

Theorem. Under the assumption that the smallest singular value of the Jacobian matrix is bounded away from zero by a small constant and the model output $g(\boldsymbol{\theta}, \boldsymbol{\xi})$ has continuous second derivatives, the expected information gain can be approximated by
$$I(\boldsymbol{\xi}) = \int_{\boldsymbol{\Theta}} \tilde{D}_{\mathrm{KL}}(\boldsymbol{\theta}_0,\boldsymbol{\xi})\, p(\boldsymbol{\theta}_0)\,\mathrm{d}\boldsymbol{\theta}_0 + \mathcal{O}\!\left(\frac{1}{s}\right),$$
with
$$\tilde{D}_{\mathrm{KL}}(\boldsymbol{\theta}_0,\boldsymbol{\xi}) = \int_{\boldsymbol{\Theta}} \log\!\left[\frac{\phi(\boldsymbol{\theta}\mid\boldsymbol{y})}{p(\boldsymbol{\theta})}\right] \phi(\boldsymbol{\theta}\mid\boldsymbol{y})\,\mathrm{d}\boldsymbol{\theta} \quad\text{and}\quad \phi(\boldsymbol{\theta}\mid\boldsymbol{y}) = \frac{\tilde{p}(\boldsymbol{\theta}\mid\boldsymbol{y},\boldsymbol{\xi})}{\int_{\boldsymbol{\Theta}} \tilde{p}(\boldsymbol{\theta}\mid\boldsymbol{y},\boldsymbol{\xi})\,\mathrm{d}\boldsymbol{\theta}}.$$

Proof. The proof is very similar to that of the theorem in [the cited work], and only an outline is provided here. Let $\phi(\boldsymbol{\theta}) = p(\hat{\boldsymbol{\theta}}\mid\boldsymbol{y})\,\exp\bigl((\boldsymbol{\theta}-\hat{\boldsymbol{\theta}})\,\boldsymbol{\Sigma}($…”
Section: Expected Information Gain in a Proposed Experiment (mentioning)
confidence: 99%
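When both the normalised Laplace density φ(θ | y) and the prior are Gaussian, the inner divergence D̃_KL in the theorem above reduces to the standard closed form for the Kullback-Leibler divergence between two Gaussians, evaluated in the sketch below; the means and covariances are illustrative placeholders.

```python
import numpy as np

def gaussian_kl(mu, Sigma, mu0, Gamma):
    """KL( N(mu, Sigma) || N(mu0, Gamma) ), the standard closed form."""
    d = mu.size
    Gamma_inv = np.linalg.inv(Gamma)
    diff = mu0 - mu
    _, logdet_Gamma = np.linalg.slogdet(Gamma)
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    return 0.5 * (np.trace(Gamma_inv @ Sigma)     # spread of the posterior in prior metric
                  + diff @ Gamma_inv @ diff       # shift of the posterior mean
                  - d
                  + logdet_Gamma - logdet_Sigma)  # volume reduction

# Example: a posterior sharply concentrated relative to a unit Gaussian prior
print(gaussian_kl(np.array([0.3, -0.1]), 0.05 * np.eye(2),
                  np.zeros(2), np.eye(2)))
```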