Making good predictions of a physical system with a computer code requires the inputs to be carefully specified. Some of these inputs, called control variables, reproduce physical conditions, whereas other inputs, called parameters, are specific to the computer code and most often uncertain. Statistical calibration aims to reduce this uncertainty with the help of a statistical model that links the code outputs to field measurements. In a Bayesian setting, the posterior distribution of the parameters is typically sampled using MCMC methods. However, these are impractical when the code runs are highly time-consuming. One way to circumvent this issue is to replace the computer code with a Gaussian process emulator and to sample a surrogate posterior distribution based on it. Calibration is then subject to an error that strongly depends on the numerical design of experiments used to fit the emulator. Under the assumption that there is no code discrepancy, we aim to reduce this error by constructing a sequential design by means of the Expected Improvement criterion. Numerical illustrations in several dimensions assess the efficiency of such sequential strategies.
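As a rough illustration of the Expected Improvement idea (not the authors' exact criterion), the following Python sketch runs a generic EGO-style EI loop on the sum-of-squares misfit between a toy one-dimensional code and synthetic field data, with a scikit-learn Gaussian process standing in for the emulator. The functions `code` and `misfit`, the parameter range and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy "computer code": output depends on a control variable x and a parameter theta.
def code(x, theta):
    return np.sin(3 * x) * theta + 0.5 * x

# Field measurements generated with the (unknown) true parameter theta* = 1.3.
x_field = np.linspace(0.0, 1.0, 8)
y_field = code(x_field, 1.3) + rng.normal(scale=0.02, size=x_field.size)

# Calibration objective: sum-of-squares misfit between code output and field data.
def misfit(theta):
    return np.sum((code(x_field, theta) - y_field) ** 2)

# Initial numerical design of experiments over the parameter range [0, 2].
thetas = list(np.linspace(0.0, 2.0, 4))
values = [misfit(t) for t in thetas]

def expected_improvement(mu, sigma, best):
    """EI criterion for minimisation."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sequential design: refit a GP emulator of the misfit, add the EI-maximising point.
candidates = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
for _ in range(6):
    gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-6),
                                  normalize_y=True)
    gp.fit(np.array(thetas).reshape(-1, 1), np.array(values))
    mu, sigma = gp.predict(candidates, return_std=True)
    ei = expected_improvement(mu, sigma, min(values))
    theta_new = float(candidates[np.argmax(ei), 0])
    thetas.append(theta_new)
    values.append(misfit(theta_new))

print("design points:", np.round(thetas, 3))
print("best theta   :", thetas[int(np.argmin(values))])
```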
We consider a model for the risk-based design of a flood protection dike, using probability distributions to represent aleatory uncertainty and possibility distributions to describe the epistemic uncertainty associated with the poorly known parameters of those probability distributions. A hybrid method is introduced to hierarchically propagate the two types of uncertainty, and the results are compared with those of a Monte Carlo-based Dempster-Shafer approach employing independent random sets and of a purely probabilistic, two-level Monte Carlo approach: the risk estimates produced are similar to those of the Dempster-Shafer method and more conservative than those of the two-level Monte Carlo approach.
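To make the comparison concrete, the sketch below illustrates only the purely probabilistic, two-level Monte Carlo baseline on a toy flood model: an outer loop samples the epistemic (poorly known) distribution parameters, an inner loop samples the aleatory flood discharge, and the spread of the resulting overflow probabilities reflects the epistemic uncertainty. The flood formula, the Gumbel discharge distribution, the dike level and the parameter intervals are all illustrative assumptions, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy flood model: maximal water level (m) as a function of annual discharge q (m^3/s).
def water_level(q):
    return 50.0 + 3.0 * np.log1p(q / 500.0)

DIKE_LEVEL = 55.0  # illustrative dike crest level (m)

# Aleatory uncertainty: annual discharge Q ~ Gumbel(mu, beta).
# Epistemic uncertainty: mu and beta are only known to lie in intervals.
MU_RANGE = (1000.0, 1200.0)
BETA_RANGE = (400.0, 600.0)

def overflow_probability(mu, beta, n_inner=20_000):
    """Inner (aleatory) loop: P[water_level(Q) > dike level] for fixed (mu, beta)."""
    q = rng.gumbel(loc=mu, scale=beta, size=n_inner)
    return np.mean(water_level(q) > DIKE_LEVEL)

# Outer (epistemic) loop: sample the distribution parameters, here uniformly.
n_outer = 200
risks = np.array([
    overflow_probability(rng.uniform(*MU_RANGE), rng.uniform(*BETA_RANGE))
    for _ in range(n_outer)
])

print(f"overflow risk: median {np.median(risks):.4f}, "
      f"95% epistemic interval [{np.quantile(risks, 0.025):.4f}, "
      f"{np.quantile(risks, 0.975):.4f}]")
```

In the hybrid and Dempster-Shafer approaches discussed in the abstract, the outer level is replaced by possibility distributions or random sets on (mu, beta), yielding lower and upper bounds on the risk rather than a single epistemic distribution.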
The parameters of a discrete stationary Markov model are the transition probabilities between states. Traditionally, the data consist of sequences of observed states for a given number of individuals over the whole observation period. In that case, the transition probabilities are estimated straightforwardly by counting one-step moves from one state to another. In many real-life problems, however, inference is much more difficult because the state sequences are not fully observed: the state of each individual is known only at some values of the time variable. A review of the problem is given, focusing on Markov chain Monte Carlo (MCMC) algorithms for performing Bayesian inference and evaluating the posterior distributions of the transition probabilities in this missing-data framework. Exploiting the dependence between the rows of the transition matrix, an adaptive MCMC mechanism that accelerates the classical Metropolis-Hastings algorithm is then proposed and studied empirically.
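The counting estimator for the fully observed case, together with its standard conjugate Bayesian counterpart (independent Dirichlet priors on the rows, so each posterior row is again Dirichlet), can be sketched as follows. The adaptive MCMC scheme for the missing-data setting is not reproduced here, and the transition matrix, number of individuals and sequence lengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fully observed setting: 3 states and a true transition matrix.
P_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.2, 0.3, 0.5]])
n_states = P_true.shape[0]

# Simulate observed state sequences for a few individuals.
def simulate(p, length=200):
    states = [0]
    for _ in range(length - 1):
        states.append(rng.choice(n_states, p=p[states[-1]]))
    return states

sequences = [simulate(P_true) for _ in range(20)]

# Count one-step moves from each state to every other state.
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1

# Maximum-likelihood estimate: normalise each row of the count matrix.
P_mle = counts / counts.sum(axis=1, keepdims=True)

# Bayesian estimate with independent Dirichlet(1, ..., 1) priors on each row:
# each posterior row is Dirichlet(1 + counts), sampled here by Monte Carlo.
posterior_draws = np.stack([
    np.array([rng.dirichlet(1.0 + counts[i]) for i in range(n_states)])
    for _ in range(1000)
])

print("MLE:\n", np.round(P_mle, 3))
print("posterior mean:\n", np.round(posterior_draws.mean(axis=0), 3))
```

When the sequences are only partially observed, this conjugacy is lost and the missing one-step moves must be integrated out, which is where the MCMC algorithms reviewed in the abstract come in.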