Adaptive Monte Carlo variance reduction is an effective framework in which a Monte Carlo simulation is run alongside a parameter search algorithm for variance reduction, although in some instances an initialization step is required to prepare the problem parameters. Despite the effectiveness of adaptive variance reduction in a variety of application areas, the length of this preliminary phase has often been left unspecified for the user to determine on a case-by-case basis, much as in typical sequential frameworks. This unspecified element can be problematic, even fatal, in realistic finite-budget situations, since the pilot run may consume most of the budget, or even exhaust it entirely. To remove the need for such an ad hoc initialization step, we develop a batching procedure for adaptive variance reduction and derive an implementable formula for the learning rate of the parameter search that minimizes an upper bound on the theoretical variance of the empirical batch mean. We analyze how the minimized upper bound decays towards the minimal estimator variance as a function of the predetermined computing budget, and we establish convergence results as the computing budget increases with the batch size held fixed. Numerical examples support the theoretical findings and illustrate the effectiveness of the proposed batching procedure.
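To make the setting concrete, the following is a minimal sketch of batched adaptive variance reduction via importance sampling, assuming a Gaussian location proposal family N(theta, I) and a fixed learning rate gamma; the function batched_adaptive_is, the choice of proposal family, and the constant step size are illustrative assumptions and do not reproduce the learning-rate formula developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def batched_adaptive_is(f, dim, n_batches, batch_size, gamma):
    """Toy batched adaptive importance sampling for mu = E[f(X)], X ~ N(0, I).

    Samples are drawn from the proposal N(theta, I); theta is updated once per
    batch by a stochastic-gradient step of size gamma on the second moment of
    the weighted integrand (a standard variance proxy). The final estimate is
    the mean of the per-batch importance-sampling means, so no separate pilot
    run is required.
    """
    theta = np.zeros(dim)                                        # drift parameter of the proposal
    batch_means = []
    for _ in range(n_batches):
        x = rng.standard_normal((batch_size, dim)) + theta       # X_i ~ N(theta, I)
        logw = -x @ theta + 0.5 * theta @ theta                  # log dN(0,I)/dN(theta,I)
        vals = f(x) * np.exp(logw)                               # unbiased per-sample estimates
        batch_means.append(vals.mean())
        # Unbiased estimate of the gradient of the second moment E[(f W)^2]
        # w.r.t. theta for the Gaussian location family:
        # grad = E_q[ f(X)^2 W(X)^2 (theta - X) ]
        grad = ((vals ** 2)[:, None] * (theta - x)).mean(axis=0)
        theta = theta - gamma * grad                              # parameter-search step
    return np.mean(batch_means), theta

# Example: f(x) = exp(x_1 + x_2) under N(0, I_2), for which E[f(X)] = e
est, theta_hat = batched_adaptive_is(lambda x: np.exp(x.sum(axis=1)),
                                     dim=2, n_batches=50, batch_size=1000, gamma=1e-3)
print(est, theta_hat)   # estimate near e = 2.718..., theta_hat near the optimal drift (1, 1)
```

In this sketch the estimator is the plain average of the per-batch means, so the parameter search runs concurrently with estimation rather than in a dedicated pilot phase; the paper's contribution, by contrast, includes the specific learning-rate formula and the analysis of the resulting variance bound.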