2016
DOI: 10.1137/151005841
A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

Abstract: Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are otherwise unavailable in physical experiments. Kennedy and O'Hagan [18] suggested an approach to estimating them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define $L_2$-consistency for calibration as a justification for calibration methods. It is shown that a simplified version…
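For orientation, the paper's consistency notion is built around an $L_2$ projection of the physical response onto the computer-model class. A minimal sketch in LaTeX, assuming the usual notation ($\zeta$ for the physical response surface, $f$ for the computer model, $\Omega$ for the input domain; the symbols are ours, not quoted from the paper):

```latex
% L2 calibration: theta^* is the parameter value whose computer-model
% output is the L2 projection of the physical response surface zeta.
\theta^* = \operatorname*{arg\,min}_{\theta \in \Theta}
  \bigl\| \zeta(\cdot) - f(\cdot, \theta) \bigr\|_{L_2(\Omega)},
\qquad
\| g \|_{L_2(\Omega)} = \Bigl( \int_{\Omega} g(x)^2 \, dx \Bigr)^{1/2}.
```

An estimator $\hat{\theta}_n$ is then called $L_2$-consistent when it converges to $\theta^*$ as the physical sample size grows.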

Cited by 84 publications (101 citation statements) | References 24 publications
“…and $\|\cdot\|_{H^{\nu+d/2}(\Omega)}$ are equivalent (Wendland, 2004; Tuo and Wu, 2016). Therefore, as a consequence of Lemma A.1, we have the following proposition for $\hat{\xi}_n$ obtained by (3).…”
Section: Appendices A: Asymptotic Results for Physical Experiments Model
confidence: 58%
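The equivalence being invoked is the standard one between the reproducing kernel Hilbert space of a Matérn-type kernel $\Phi$ with smoothness $\nu$ and the Sobolev space $H^{\nu+d/2}(\Omega)$; a sketch of the statement (the constants $c_1, c_2$ and the symbol $\mathcal{N}_\Phi$ for the RKHS norm are our notation):

```latex
% Norm equivalence (cf. Wendland, 2004): there exist constants
% 0 < c_1 <= c_2 such that for all f in H^{nu+d/2}(Omega),
c_1 \, \| f \|_{H^{\nu+d/2}(\Omega)}
  \;\le\; \| f \|_{\mathcal{N}_\Phi(\Omega)}
  \;\le\; c_2 \, \| f \|_{H^{\nu+d/2}(\Omega)}.
```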
“…Although there is a rich literature on calibration, the existing approaches focus mainly on continuous outputs. Inspired by the optimality of the frequentist approach proposed by Tuo and Wu (2016), we develop a calibration framework for binary outputs using the idea of $L_2$ projection. Ideally, θ can be obtained by minimizing the discrepancy, measured by the $L_2$ distance, between the underlying probability functions in the physical and computer experiments.…”
Section: A New Calibration Framework
confidence: 99%
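To make the $L_2$-projection idea concrete for binary outputs, here is a minimal runnable sketch. Everything in it is a hypothetical stand-in (the logistic `p_sim` computer model, the `p_true` surrogate for a smoothed physical probability, the grid quadrature); it illustrates the technique, not the cited paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_sim(x, theta):
    # Hypothetical computer model: success probability over x in [0, 1].
    return 1.0 / (1.0 + np.exp(-5.0 * (x - theta)))

def p_true(x):
    # Stand-in for a smoothed estimate of the physical success
    # probability, including a small model discrepancy term.
    return 1.0 / (1.0 + np.exp(-5.0 * (x - 0.4))) + 0.05 * np.sin(2 * np.pi * x)

x_grid = np.linspace(0.0, 1.0, 201)  # quadrature grid on the input domain

def l2_discrepancy(theta):
    # Grid approximation of the squared L2 distance between the two
    # probability functions; minimizing it gives the L2 projection.
    diff = p_true(x_grid) - p_sim(x_grid, theta)
    return np.mean(diff ** 2)

result = minimize_scalar(l2_discrepancy, bounds=(0.0, 1.0), method="bounded")
print("L2-projection estimate of theta:", result.x)
```

The estimate lands near 0.4 rather than at any single "true" parameter value, which is the point of the projection view: the calibration target is defined by minimal $L_2$ discrepancy, not by an assumed data-generating value.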
“…This brand of Bayesian calibration is criticized because confounding between the discrepancy and the parameter makes the parameter's posterior depend on the prior placed on the discrepancy. Tuo and Wu (2016) formally described this, demonstrating that the posterior mode of the parameter converges to a value that depends on the discrepancy's prior in a frequentist setting. Plumlee () included a potential fix but offered only Bayesian support and did not confirm that the problems uncovered by Tuo and Wu (2016) are alleviated. This fix also requires the derivative of the computer model, which can be burdensome in some cases and impossible in others, such as the discrete-parameter case of Section 9.2.…”
Section: Introduction
confidence: 91%
“…In Kennedy and O'Hagan (2001) the authors first studied the computer model calibration problem by assigning a Gaussian process prior to the discrepancy function δ(·) and then obtaining its posterior distribution. Although the Kennedy and O'Hagan (abbreviated as KO) approach did not tackle the identifiability issue directly, Tuo and Wu (2016) discussed this potential issue in a simplified setting. Specifically, if the discrepancy function follows a mean-zero Gaussian process prior with covariance function Ψ, θ follows a uniform prior, and the physical data are noise-free (i.e., the $e_i$'s are zero) in the KO approach, then the posterior density of θ is…”
Section: Introduction
confidence: 99%
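The formula truncated above can be sketched under exactly the simplifications listed (writing $y^p = (y^p_1, \dots, y^p_n)^\top$ for the noise-free physical observations at design points $x_1, \dots, x_n$, $f_\theta = (f(x_1, \theta), \dots, f(x_n, \theta))^\top$, and $\Psi_n = (\Psi(x_i, x_j))_{i,j}$; the vector notation is ours):

```latex
% Posterior of theta under the simplified KO setting:
% delta ~ GP(0, Psi), theta ~ Uniform, noise-free physical data.
\pi(\theta \mid y^p) \;\propto\;
  \exp\Bigl\{ -\tfrac{1}{2}\,
  (y^p - f_\theta)^\top \Psi_n^{-1} \, (y^p - f_\theta) \Bigr\}.
```

The posterior mode therefore depends on the choice of Ψ, which is the identifiability concern the surrounding discussion raises.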