We present a unified view of likelihood-based Gaussian process regression for simulation experiments exhibiting input-dependent noise. Replication plays an important role in that context; however, previous methods leveraging replicates have either ignored the computational savings that come from such designs, or have short-cut full likelihood-based inference to remain tractable. Starting with homoskedastic processes, we show how multiple applications of a well-known Woodbury identity facilitate inference for all parameters under the likelihood (without approximation), bypassing the typical full-data-sized calculations. We then borrow a latent-variable idea from machine learning to address heteroskedasticity, adapting it to work within the same thrifty inferential framework, thereby simultaneously leveraging the computational and statistical efficiency of designs with replication. The result is an inferential scheme that can be characterized as a single objective function, complete with closed-form derivatives, for rapid library-based optimization. Illustrations are provided, including real-world simulation experiments from manufacturing and the management of epidemics.

arXiv:1611.05902v2 [stat.ME] 13 Nov 2017

[…] stochasticity. Whereas in the physical sciences solvers are often deterministic, or, if they involve Monte Carlo, the rate of convergence is often known (Picheny and Ginsbourger, 2013), in the social and biological sciences simulations tend to involve randomly interacting agents. In that setting, signal-to-noise ratios can vary dramatically across experiments, and across configuration (or input) spaces within experiments.
We are motivated by two examples, from inventory control (Hong and Nelson, 2006; Xie et al., 2012) and online management of emerging epidemics (Hu and Ludkovski, 2017), which exhibit both features. Modeling methodology for large simulation efforts with intrinsic stochasticity is lagging. One attractive design tool is replication, i.e., repeated observations at identical inputs. Replication offers a glimpse at pure simulation variance, which is valuable for detecting a weak signal in high-noise settings. It also holds the potential for computational savings through pre-averaging of repeated observations, and it becomes doubly essential when the noise level varies over the input space. Although there are many ways to embellish the classical Gaussian process (GP) setup for heteroskedastic modeling, e.g., through choices of the covariance kernel, few acknowledge computational considerations; in fact, many exacerbate the problem. A notable exception is stochastic kriging (SK, Ankenman et al., 2010), which leverages replication for thriftier computation in low signal-to-noise regimes, where it is crucial to distinguish intrinsic stochasticity from extrinsic model uncertainty. However, SK has several drawbacks: inference for unknowns is not based completely on the likelihood; it has the crutch of requiring (a minimal amount of) replication at each design site, which limits its application; finally, the modeling and ext...
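The Woodbury savings described in the first abstract can be checked numerically. With a_i replicates at each of nbar unique sites, the full n x n covariance factors as K_N = U Kbar U' + g I_n for an incidence matrix U, and both the quadratic form and the log-determinant of the Gaussian log-likelihood reduce to nbar x nbar computations. Below is a minimal sketch under assumed choices of our own (squared-exponential kernel, invented parameter values, variable names like `Xu`, `Kbar`); it is an illustration of the identity, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-d design: nbar unique sites, a_i replicates each
nbar, ell, tau2, g = 5, 0.3, 1.0, 0.1
Xu = np.sort(rng.uniform(0, 1, nbar))           # unique inputs
a = rng.integers(2, 6, nbar)                    # replicate counts
n = int(a.sum())                                # full data size

def kern(X1, X2):
    """Squared-exponential kernel (our choice, for illustration only)."""
    d = X1[:, None] - X2[None, :]
    return tau2 * np.exp(-0.5 * (d / ell) ** 2)

Kbar = kern(Xu, Xu)                             # nbar x nbar
U = np.repeat(np.eye(nbar), a, axis=0)          # n x nbar incidence matrix
y = U @ rng.normal(size=nbar) + np.sqrt(g) * rng.normal(size=n)

# Full-data log-likelihood: O(n^3) solves and determinants
KN = U @ Kbar @ U.T + g * np.eye(n)
full = -0.5 * (y @ np.linalg.solve(KN, y)
               + np.linalg.slogdet(KN)[1] + n * np.log(2 * np.pi))

# Woodbury/determinant-lemma version: only nbar x nbar linear algebra
A = np.diag(a.astype(float))
M = np.linalg.inv(Kbar) + A / g                 # Kbar^{-1} + A/g
yt = U.T @ y                                    # site-wise sums of y
quad = y @ y / g - yt @ np.linalg.solve(M, yt) / g**2
logdet = (np.linalg.slogdet(M)[1] + np.linalg.slogdet(Kbar)[1]
          + n * np.log(g))
wood = -0.5 * (quad + logdet + n * np.log(2 * np.pi))
```

The two quantities `full` and `wood` agree to numerical precision, while the second touches only matrices of the unique-design size.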
We consider the valuation of energy storage facilities within the framework of stochastic control. Our two main examples are natural gas dome storage and hydroelectric pumped storage. Focusing on the timing-flexibility aspect of the problem, we construct an optimal switching model with inventory. Thus, the manager has a constrained compound American option on the inter-temporal spread of the commodity prices. Extending the methodology from Carmona and Ludkovski (2008), we then construct a robust numerical scheme based on Monte Carlo regressions. Our simulation method can handle a generic Markovian price model and easily incorporates many operational features and constraints. To overcome the main challenge of the path-dependent storage levels, two numerical approaches are proposed. The resulting scheme is compared to the traditional quasi-variational framework and illustrated with several concrete examples. We also consider related problems of interest, such as supply guarantees and mines management.

Key words: gas storage; optimal switching; least squares Monte Carlo; hydro pumped storage; impulse control; commodity derivatives.

Acknowledgments: We thank the participants of the Banff BIRS Workshop 07w-5502 "Mathematics and the Environment" and Zhenwei J. Qin for many useful comments and discussions. We also thank the anonymous referees whose feedback led to much improved presentation.
We study the financial engineering aspects of operational flexibility of energy assets. The current practice relies on a representation that uses strips of European spark-spread options, ignoring the operational constraints. Instead, we propose a new approach based on a stochastic impulse control framework. The model reduces to a cascade of optimal stopping problems and directly demonstrates that the optimal dispatch policies can be described with the aid of 'switching boundaries', similar to the free boundaries of standard American options. Our main contribution is a new method of numerical solution relying on Monte Carlo regressions. The scheme uses dynamic programming to efficiently approximate the optimal dispatch policy along the simulated paths. Convergence analysis is carried out and results are illustrated with a variety of concrete examples. We benchmark and compare our scheme to alternative numerical methods.
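The "Monte Carlo regressions" driving the schemes in the two abstracts above extend the regression-based dynamic programming of Longstaff and Schwartz. The sketch below shows only that underlying idea on a plain American put, not the papers' storage or switching models; all parameters and the quadratic regression basis are assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical American put under GBM (all parameters invented)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, npaths = 50, 20000
dt = T / steps

# Simulate price paths; column 0 is the initial price
Z = rng.standard_normal((npaths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))
S = np.hstack([np.full((npaths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                       # value at maturity

# Backward induction: regress continuation values on in-the-money paths
for t in range(steps - 1, 0, -1):
    V *= np.exp(-r * dt)                   # discount one step
    itm = payoff(S[:, t]) > 0
    X = S[itm, t]
    basis = np.column_stack([np.ones_like(X), X, X**2])
    coef, *_ = np.linalg.lstsq(basis, V[itm], rcond=None)
    cont = basis @ coef                    # estimated continuation value
    exercise = payoff(X) > cont
    V[itm] = np.where(exercise, payoff(X), V[itm])

price = np.exp(-r * dt) * V.mean()         # approximate option value at t=0
```

In the switching setting of the abstracts, the single exercise decision is replaced by a choice among operating regimes, but the same pattern of regressing pathwise continuation values recurs at every stage.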
We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine whether a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
We consider a framework for solving optimal liquidation problems in limit order books. In particular, order arrivals are modeled as a point process whose intensity depends on the liquidation price. We set up a stochastic control problem in which the goal is to maximize the expected revenue from liquidating the entire position held. We solve this optimal liquidation problem for power-law and exponential-decay order book models explicitly and discuss several extensions. We also consider the continuous selling (or fluid) limit when the trading units are ever smaller and the intensity is ever larger. This limit provides an analytical approximation to the value function and the optimal solution. Using techniques from viscosity solutions we show that the discrete state problem and its optimal solution converge to the corresponding quantities in the continuous selling limit uniformly on compacts.
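To make the price-dependent arrival intensity in the abstract above concrete, the toy Monte Carlo below evaluates a *static* posting premium p under an exponential-decay intensity lam(p) = lam0 * exp(-kappa * p). This is only a fixed-policy simulation over invented parameters, a far simpler object than the paper's dynamic stochastic control solution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical exponential-decay order book: posting at premium p above
# the reference price draws buy orders at Poisson rate lam0 * exp(-kappa * p)
lam0, kappa, T, q0 = 10.0, 1.0, 1.0, 5   # invented rate, decay, horizon, lots

def expected_revenue(p, nsim=20000):
    """Monte Carlo revenue from selling q0 unit lots at fixed premium p
    before horizon T (unsold inventory is simply forfeited here)."""
    lam = lam0 * np.exp(-kappa * p)
    # interarrival times of the first q0 fills on each simulated path
    gaps = rng.exponential(1.0 / lam, size=(nsim, q0))
    fills = (np.cumsum(gaps, axis=1) <= T).sum(axis=1)   # units sold by T
    return p * fills.mean()

# Crude grid search over the static premium: low p sells fast but cheap,
# high p is lucrative per fill but arrivals dry up
grid = np.linspace(0.1, 2.0, 20)
rev = [expected_revenue(p) for p in grid]
p_star = grid[int(np.argmax(rev))]
```

The interior maximizer `p_star` reflects the basic trade-off the control problem formalizes; the paper's dynamic policy additionally adapts the posted price to the remaining inventory and time.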