2021
DOI: 10.1007/s11222-021-10024-8
A fast and calibrated computer model emulator: an empirical Bayes approach

Cited by 3 publications (2 citation statements)
References 57 publications
“…Instead of coupling the data $\mathcal{D}^s$ and $\mathcal{D}^p$ to infer the posterior, they suggested first training the emulator, $\hat{f}(\mathbf{x}, \boldsymbol{\theta})$, using the predictive mean of the GP posterior based on the simulation data $\mathcal{D}^s$, and then inferring the posterior as done previously by replacing $f$ with $\hat{f}$. On the other hand, Kejzlar et al. (2021) used an empirical Bayes approach, wherein instead of placing a prior distribution on the unknown parameters, including $\boldsymbol{\theta}$, $\sigma_\varepsilon^2$, $\boldsymbol{\gamma}$, $\boldsymbol{\omega}$, $\mu_\delta$, and $\mu_f$, the method estimates these parameters directly from the data. To enhance the efficiency of the MCMC methods, Rumsey and Huerta (2021) employed the eigenvalue decomposition to approximate the inverse of the covariance matrix in the likelihood (6), that is,…”
Section: Expensive Computer Simulation
Confidence: 99%
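The empirical Bayes idea quoted above (estimating GP hyperparameters directly from the data instead of placing priors on them) can be illustrated by maximizing the marginal likelihood of a small GP model. This is a minimal sketch, assuming a zero-mean GP with an RBF kernel and synthetic data; the kernel choice, data, and variable names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, y):
    """Negative log marginal likelihood of a zero-mean GP with an RBF kernel.

    log_params = [log length-scale, log signal variance, log noise variance];
    optimizing in log space keeps all three quantities positive.
    """
    ell, tau2, sigma2 = np.exp(log_params)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = tau2 * np.exp(-0.5 * d2 / ell**2) + sigma2 * np.eye(len(X))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return 1e10  # penalize numerically singular proposals
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 40)
y = np.sin(X) + 0.1 * rng.standard_normal(40)

# Empirical Bayes: plug in the marginal-likelihood maximizers rather than
# sampling the hyperparameters from a posterior.
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y))
ell_hat, tau2_hat, sigma2_hat = np.exp(res.x)
```

The fitted values are then held fixed in downstream inference, which is what makes the approach fast relative to fully Bayesian hyperparameter sampling.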
“…Instead of coupling the data $\mathcal{D}^s$ and $\mathcal{D}^p$ to infer the posterior, they suggested first training the emulator, $\hat{f}(\mathbf{x}, \boldsymbol{\theta})$, using the predictive mean of the GP posterior based on the simulation data $\mathcal{D}^s$, and then inferring the posterior as done previously by replacing $f$ with $\hat{f}$. On the other hand, Kejzlar et al. (2021) used an empirical Bayes approach, wherein instead of placing a prior distribution on the unknown parameters, including $\boldsymbol{\theta}$, $\sigma_\varepsilon^2$, $\boldsymbol{\gamma}$, $\boldsymbol{\omega}$, $\mu_\delta$, and $\mu_f$, the method estimates these parameters directly from the data. To enhance the efficiency of the MCMC methods, Rumsey and Huerta (2021) employed the eigenvalue decomposition to approximate the inverse of the covariance matrix in the likelihood (6), that is, $\tau_\delta \Phi_\delta \boldsymbol{\gamma} + \sigma$…”
Section: Posterior Inference
Confidence: 99%
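The eigenvalue-decomposition speedup mentioned in the statement can be sketched generically: keep the $k$ leading eigenpairs of the kernel matrix exactly and absorb the remaining directions into the nugget, giving a cheap approximation to $(K + \nu I)^{-1}$. This is a hedged illustration of the general idea, not the specific decomposition used by Rumsey and Huerta (2021); the kernel, rank, and nugget below are arbitrary choices.

```python
import numpy as np

def approx_inverse(K, k, nugget):
    """Rank-k eigenvalue approximation of (K + nugget * I)^{-1}.

    Uses the identity (K + nu I)^{-1} = (1/nu) I - sum_i v_i v_i^T
    * lambda_i / (nu * (lambda_i + nu)), truncated to the k leading
    eigenpairs, so only k eigenvectors need to be stored and applied.
    """
    vals, vecs = np.linalg.eigh(K)                 # ascending eigenvalues
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    diag = vals / (nugget * (vals + nugget))       # weights on kept directions
    return np.eye(len(K)) / nugget - (vecs * diag) @ vecs.T

# Smooth RBF kernel: eigenvalues decay quickly, so a small rank suffices.
x = np.linspace(0.0, 1.0, 50)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2**2)
nugget = 1e-2

A = approx_inverse(K, k=15, nugget=nugget)
exact = np.linalg.inv(K + nugget * np.eye(50))
rel_err = np.linalg.norm(A - exact) / np.linalg.norm(exact)
```

The truncation error in each discarded direction is $\lambda_i / (\nu(\lambda_i + \nu))$, which is negligible once the eigenvalues fall well below the nugget, which is why fast eigenvalue decay makes this an effective way to cheapen repeated likelihood evaluations inside MCMC.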