We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.
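As a rough illustration of how the two uncertainty types enter the predictor, the sketch below builds a one-dimensional stochastic-kriging-style predictor under strong simplifying assumptions: a known Gaussian (squared-exponential) correlation with fixed parameters, a constant trend estimated by the grand mean, and known intrinsic variances. The function names and parameter choices are ours, not the paper's; the point is only that the intrinsic variance enters as a heteroscedastic "nugget" added to the extrinsic covariance.

```python
import numpy as np

def gauss_corr(x1, x2, theta):
    """Gaussian (squared-exponential) correlation between scalar design points."""
    d = np.subtract.outer(x1, x2)
    return np.exp(-theta * d ** 2)

def stoch_krig_predict(x_design, ybar, var_intrinsic, n_reps, x_new,
                       tau2=1.0, theta=1.0):
    """Toy stochastic-kriging predictor at the point x_new.

    Extrinsic covariance: tau2 times the Gaussian correlation of the design.
    Intrinsic covariance: diagonal of (replication variance / replications),
    which acts as a heteroscedastic 'nugget' that smooths rather than
    interpolates when the simulation output is noisy.
    """
    Sigma_M = tau2 * gauss_corr(x_design, x_design, theta)   # extrinsic
    Sigma_eps = np.diag(var_intrinsic / n_reps)              # intrinsic
    k = tau2 * gauss_corr(x_new, x_design, theta).ravel()    # cross-covariance
    mu = ybar.mean()                                         # crude constant trend
    weights = np.linalg.solve(Sigma_M + Sigma_eps, ybar - mu)
    return mu + k @ weights
```

With zero intrinsic variance the predictor degenerates to ordinary kriging and interpolates the design points exactly (the deterministic computer-experiment case); with positive intrinsic variance the prediction at a design point is pulled toward the trend.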
Variance-based global sensitivity analysis decomposes the variance of the output of a computer model, resulting from uncertainty about the model's inputs, into variance components associated with each input's contribution. The two most common variance-based sensitivity measures, the first-order effects and the total effects, may fail to sum to the total variance. They are often used together in sensitivity analysis, because neither of them adequately deals with interactions in the way the inputs affect the output. Therefore, Owen proposed an alternative sensitivity measure, based on the concept of the Shapley value in game theory, and showed that it always sums to the correct total variance when the inputs are independent. We analyze Owen's measure, which we call the Shapley effect, in the case of dependent inputs. We show empirically how the first-order and total effects, even when used together, may fail to appropriately measure how sensitive the output is to uncertainty in the inputs when there is probabilistic dependence or structural interaction among the inputs. Because they involve all subsets of the inputs, Shapley effects could be expensive to compute if the number of inputs is large. We propose a Monte Carlo algorithm that makes accurate approximation of Shapley effects computationally affordable, and we discuss efficient allocation of the computation budget in this algorithm.
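The permutation-sampling idea behind such a Monte Carlo approximation can be sketched as follows. This is a generic Shapley-value estimator, not the paper's specific algorithm or budget allocation: it assumes a user-supplied `cost(S)` returning Var(E[Y | X_S]) for a subset S of inputs, which in a real simulation study would itself have to be estimated by nested Monte Carlo.

```python
import numpy as np

def shapley_effects(cost, k, n_perms=2000, rng=None):
    """Estimate Shapley effects by sampling random permutations of the
    k inputs. cost(S) should return Var(E[Y | X_S]) for a subset S.

    The exact Shapley value averages each input's marginal contribution
    over all k! orderings; sampling orderings keeps the work affordable
    when k is large.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros(k)
    for _ in range(n_perms):
        perm = rng.permutation(k)
        prev, members = 0.0, []
        for i in perm:
            members.append(i)
            c = cost(frozenset(members))
            phi[i] += c - prev   # marginal contribution of input i
            prev = c
    return phi / n_perms
```

For a toy additive model Y = Σ βᵢXᵢ with independent unit-variance inputs, Var(E[Y | X_S]) = Σ_{i∈S} βᵢ², so input i's marginal contribution is βᵢ² in every ordering and the estimator recovers the Shapley effects βᵢ² exactly.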
We present procedures for selecting the best or near-best of a finite number of simulated systems when best is defined by maximum or minimum expected performance. The procedures are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system. The goal of such a sequential procedure is to eliminate, at an early stage of experimentation, those simulated systems that are apparently inferior, and thereby reduce the overall computational effort required to find the best. The procedures we present accommodate unequal variances across systems and the use of common random numbers. However, they are based on the assumption of normally distributed data, so we analyze the impact of batching (to achieve approximate normality or independence) on the performance of the procedures. Comparisons with some existing indifference-zone procedures are also provided.
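The elimination idea can be illustrated with the toy loop below. This is deliberately simplified and is not one of the paper's procedures: it ignores common random numbers, batching, and formal error control, and simply drops a system once its sample mean trails the current leader's by more than a variance-based tolerance; the function and parameter names are ours.

```python
import numpy as np

def sequential_elimination(sample, k, n0=10, batch=10, max_n=400, z=3.0):
    """Toy sequential elimination (illustrative only): keep drawing small
    batches from each surviving system, and stop sampling any system
    whose mean is far below the leader's relative to both standard errors."""
    data = {i: list(sample(i, n0)) for i in range(k)}
    alive = set(range(k))
    while len(alive) > 1 and all(len(data[i]) < max_n for i in alive):
        for i in alive:
            data[i].extend(sample(i, batch))          # small incremental samples
        stats = {i: (np.mean(data[i]), np.var(data[i], ddof=1) / len(data[i]))
                 for i in alive}
        leader = max(alive, key=lambda i: stats[i][0])
        for i in list(alive - {leader}):
            gap = stats[leader][0] - stats[i][0]
            if gap > z * np.sqrt(stats[leader][1] + stats[i][1]):
                alive.remove(i)                       # apparently inferior
    return alive
```

Early elimination is what saves computation: clearly inferior systems stop consuming replications long before the sampling budget is exhausted, while close competitors keep being sampled.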
We propose an optimization-via-simulation algorithm, called COMPASS, for use when the performance measure is estimated via a stochastic, discrete-event simulation, and the decision variables are integer ordered. We prove that COMPASS converges to the set of locally optimal solutions with probability 1 for both terminating and steady-state simulation, and for both fully constrained problems and partially constrained or unconstrained problems under mild conditions.
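To fix ideas about the problem setting, the sketch below is a bare-bones randomized descent over integer-ordered decision variables with averaged noisy replications. It is emphatically not the COMPASS algorithm, which instead adaptively constructs a shrinking "most promising area" from all visited solutions; all names and parameters here are our own illustration.

```python
import numpy as np

def local_search_integer(noisy_f, x0, bounds, n_reps=20, n_iters=50, rng=None):
    """Toy integer-ordered optimization via simulation: estimate the
    current point and its +/-1 coordinate neighbors by averaging noisy
    replications, move to the best, and stop at a local minimizer."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=int)
    lo, hi = np.array(bounds[0]), np.array(bounds[1])

    def estimate(pt):
        return np.mean([noisy_f(pt, rng) for _ in range(n_reps)])

    for _ in range(n_iters):
        candidates = [x]
        for d in range(len(x)):
            for step in (-1, 1):
                y = x.copy()
                y[d] += step
                if np.all(y >= lo) and np.all(y <= hi):
                    candidates.append(y)
        vals = [estimate(c) for c in candidates]
        best = candidates[int(np.argmin(vals))]
        if np.array_equal(best, x):
            break                 # no neighbor looks better: local optimum
        x = best
    return x
```

On a noisy quadratic with a well-separated minimizer, this greedy neighborhood search walks to the locally (here also globally) optimal integer solution; the convergence guarantees the abstract refers to are, of course, a property of COMPASS itself, not of this toy.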
We describe the basic principles of ranking and selection, a collection of experiment-design techniques for comparing "populations" with the goal of finding the best among them. We then describe the challenges and opportunities encountered in adapting ranking-and-selection techniques to stochastic simulation problems, along with key theorems, results, and analysis tools that have proven useful in extending them to this setting. Some specific procedures are presented along with a numerical illustration.