We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, MIMC uses high-order mixed differences instead of the first-order differences used in MLMC to dramatically reduce the variance of the hierarchical differences. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis and which enlarge the range of problem parameters for which we achieve the optimal convergence rate, O(TOL^-2). Moreover, in MIMC, the rate of increase of required memory with respect to TOL is independent of the number of directions up to a logarithmic term, which allows far more accurate solutions to be computed in higher dimensions than is possible with MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal index sets for MIMC based on properly defined profits that in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of total degree (TD) type. In some cases, using optimal index sets, MIMC achieves a better computational complexity rate than the corresponding rate obtained with full tensor index sets. We also show the asymptotic normality of the statistical error in the resulting MIMC estimator and in this way justify our error estimate, which allows both the required accuracy and the confidence level of our computational results to be prescribed. Finally, we include numerical experiments involving a partial differential equation posed in three spatial dimensions and with random coefficients to substantiate the analysis and illustrate the corresponding computational savings of MIMC.
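The following is a minimal sketch of the mixed-difference structure underlying a MIMC estimator, not the full method from the paper: for each multi-index alpha in the chosen index set, first-order differences are applied in every direction simultaneously, and the resulting mixed differences are averaged over independent samples. The sampler `sample_g(alpha, omega)`, the sample counts `n_samples`, and the use of a scalar Gaussian random input are placeholder assumptions that a concrete application would replace.

```python
import itertools
import numpy as np

def mimc_estimate(sample_g, index_set, n_samples, rng=None):
    """Average first-order mixed differences over a multi-index set.

    sample_g(alpha, omega): approximate quantity of interest at the
        discretization levels given by the tuple alpha, for random input omega.
    index_set: iterable of tuples of non-negative integers.
    n_samples: dict mapping each multi-index to its number of samples.
    """
    rng = rng or np.random.default_rng()
    total = 0.0
    for alpha in index_set:
        d = len(alpha)
        acc = 0.0
        for _ in range(n_samples[alpha]):
            omega = rng.standard_normal()  # same random input for all 2^d corners
            # Tensorized first-order difference: alternating-sign sum over the
            # "corners" alpha - j with j in {0,1}^d; corners with a negative
            # entry contribute zero and are skipped.
            for j in itertools.product((0, 1), repeat=d):
                beta = tuple(a - s for a, s in zip(alpha, j))
                if min(beta) < 0:
                    continue
                acc += (-1) ** sum(j) * sample_g(beta, omega)
        total += acc / n_samples[alpha]
    return total
```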
We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between the bias and statistical error contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and in this way justify our error estimate, which allows both the required accuracy and the confidence level of the final result to be prescribed. Numerical experiments substantiate the analysis and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients.
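A minimal sketch of the continuation idea follows, assuming user-supplied routines `run_mlmc(tol, model)` (an MLMC solve at a given tolerance, returning an estimate and its per-level sample data) and `calibrate(samples)` (a fit of the parametric cost/weak/strong error models); the Bayesian calibration and the non-trivial bias/statistical splitting from the paper are not reproduced here.

```python
def cmlmc(run_mlmc, calibrate, tol_final, tol_init, reduction=2.0):
    """Solve for a sequence of geometrically decreasing tolerances,
    re-fitting the parametric error/work models after each pass, until the
    target tolerance tol_final is reached."""
    tol, model, samples = tol_init, None, []
    while True:
        estimate, new_samples = run_mlmc(tol, model)  # MLMC run at the current tolerance
        samples += new_samples
        model = calibrate(samples)                    # update cost/weak/strong error models
        if tol <= tol_final:
            return estimate
        tol = max(tol / reduction, tol_final)
```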
In this work we introduce the Multi-Index Stochastic Collocation (MISC) method for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and of quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: this optimization is a crucial step and allows us to build a method that, provided the solution is sufficiently regular, is potentially more effective than other multilevel collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We demonstrate the effectiveness of MISC in several computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi-Optimal Stochastic Collocation and Sparse Composite Collocation methods.
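The selection of mixed differences can be illustrated with a greedy, profit-based sketch; the contribution models `error_contrib` and `work_contrib` are assumed inputs (in the paper they follow from the product-type decay rates of the mixed differences), and this is not the paper's exact optimization procedure.

```python
def select_indices(candidates, error_contrib, work_contrib, tol):
    """Greedy profit-based selection: rank candidate multi-indices by
    profit = error contribution / work contribution and keep the most
    profitable ones until the estimated error of the discarded indices
    drops below tol."""
    ranked = sorted(candidates,
                    key=lambda a: error_contrib(a) / work_contrib(a),
                    reverse=True)
    chosen = []
    remaining_error = sum(error_contrib(a) for a in candidates)
    for a in ranked:
        if remaining_error <= tol:
            break
        chosen.append(a)
        remaining_error -= error_contrib(a)
    return chosen
```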
We investigate the problem of computing a nested expectation of the form E[H(E[X|Y])], where H is the Heaviside function. This nested expectation appears, for example, when estimating the probability of a large loss from a financial portfolio. We present a method that combines the idea of using Multilevel Monte Carlo (MLMC) for nested expectations with the idea of adaptively selecting the number of samples in the approximation of the inner expectation, as proposed by Broadie et al. (2011). We propose and analyse an algorithm that adaptively selects the number of inner samples on each MLMC level and prove that the resulting MLMC method with adaptive sampling has an O(ε^-2 |log ε|^2) complexity to achieve a root mean-squared error ε. The theoretical analysis is verified by numerical experiments on a simple model problem. We also present a stochastic root-finding algorithm that, combined with our adaptive methods, can be used to compute other risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), with the latter being achieved with O(ε^-2) complexity.
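For reference, a plain nested Monte Carlo sketch of the quantity E[H(E[X|Y])] is given below, without the MLMC structure or the adaptive inner sampling analysed in the paper; the samplers `sample_y` and `sample_x_given_y` are assumed user-supplied.

```python
import numpy as np

def nested_probability(sample_y, sample_x_given_y, n_outer, n_inner, rng=None):
    """Plain nested Monte Carlo estimate of E[H(E[X|Y])]: for each outer
    sample of Y, average n_inner samples of X given Y and apply the
    Heaviside function to the inner mean."""
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(n_outer):
        y = sample_y(rng)
        inner_mean = np.mean([sample_x_given_y(y, rng) for _ in range(n_inner)])
        hits += inner_mean >= 0.0          # Heaviside H(inner_mean)
    return hits / n_outer
```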
We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. We optimize hierarchies with geometric and non-geometric sequences of mesh sizes and show that geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity as non-geometric optimal hierarchies. We discuss how enforcing constraints on the parameters of MLMC hierarchies affects their optimality; these constraints include upper and lower bounds on the mesh size and the requirement that the numbers of samples and of discretization elements be integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies together with the Continuation MLMC algorithm [11]. The first example considers a three-dimensional elliptic partial differential equation with random inputs; its space discretization uses continuous piecewise trilinear finite elements, and the resulting linear systems are solved by either a direct or an iterative solver. The second example considers a one-dimensional Itô stochastic differential equation discretized by the Milstein scheme.
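As a concrete illustration, the sketch below builds a geometric hierarchy under assumed power-law models for the weak error, level variances, and costs, and allocates samples with the classical constrained-optimization formula; the constants, constraint handling, and non-geometric hierarchies studied in the paper are not included.

```python
import numpy as np

def geometric_mlmc_hierarchy(tol, h0, beta, q_w, var_model, cost_model, theta=0.5):
    """Sketch of a geometric MLMC hierarchy with h_l = h0 * beta**(-l),
    assuming bias ~ h_L**q_w, level variances V_l = var_model(h_l) and costs
    C_l = cost_model(h_l).  The deepest level L is chosen so the bias meets
    theta*tol, and samples follow the classical M_l ~ sqrt(V_l/C_l) rule so
    the statistical error meets (1 - theta)*tol."""
    L = 0
    while (h0 * beta ** (-L)) ** q_w > theta * tol:   # bias constraint
        L += 1
    h = [h0 * beta ** (-l) for l in range(L + 1)]
    V = np.array([var_model(hl) for hl in h])
    C = np.array([cost_model(hl) for hl in h])
    lam = np.sum(np.sqrt(V * C)) / ((1 - theta) * tol) ** 2   # Lagrange multiplier
    M = np.ceil(lam * np.sqrt(V / C)).astype(int)             # samples per level
    return h, M
```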