2019
DOI: 10.48550/arxiv.1904.09929
Preprint

Unbiased Multilevel Monte Carlo: Stochastic Optimization, Steady-state Simulation, Quantiles, and Other Applications

Abstract: We present general principles for the design and analysis of unbiased Monte Carlo estimators in a wide range of settings. Our estimators possess finite work-normalized variance under mild regularity conditions. We apply our estimators to various settings of interest, including unbiased optimization in Sample Average Approximations, unbiased steady-state simulation of regenerative processes, quantile estimation, and nested simulation problems.

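For readers who want to see the mechanism the abstract alludes to, the following is a minimal Python sketch of a single-term randomized-truncation (unbiased MLMC) estimator of f(E[X]) for a smooth function f, in the spirit of the Rhee-Glynn and Blanchet-Glynn constructions. The sampling interface, the geometric level distribution, and the choice p = 0.6 are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def unbiased_mlmc_estimate(sample, f, p=0.6, rng=None):
    """One draw of a randomized-truncation (unbiased MLMC) estimator of f(E[X]).

    sample(n, rng) -> array of n i.i.d. copies of X (illustrative interface).
    f              -> smooth real-valued function applied to a sample mean.
    p              -> success probability of the geometric level distribution;
                      roughly 1/2 < p < 3/4 balances finite variance against
                      finite expected cost for smooth f (illustrative tuning).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random truncation level N ~ Geometric(p) shifted to {0, 1, 2, ...}.
    n = rng.geometric(p) - 1
    prob_n = p * (1.0 - p) ** n          # P(N = n)

    if n == 0:
        # Base level: crude single-sample approximation of f(E[X]).
        delta = f(sample(1, rng).mean())
    else:
        x = sample(2 ** n, rng)           # 2^n samples at level n
        fine = f(x.mean())                # fine approximation
        # Antithetic coarse term: average f over the two half-batches, which
        # makes the level differences shrink fast enough for smooth f.
        coarse = 0.5 * (f(x[: 2 ** (n - 1)].mean()) + f(x[2 ** (n - 1):].mean()))
        delta = fine - coarse

    # Importance-weighting by P(N = n) telescopes the level differences,
    # so the single draw is unbiased for f(E[X]) under regularity conditions.
    return delta / prob_n

# Illustrative use: unbiased estimation of exp(E[Z]) for Z ~ N(0, 1),
# averaging many independent replications of the single-draw estimator.
rng = np.random.default_rng(0)
draws = [unbiased_mlmc_estimate(lambda n, r: r.normal(size=n), np.exp, rng=rng)
         for _ in range(20000)]
print(np.mean(draws))   # should be close to exp(0) = 1
```

The key design constraint is the one the abstract points to: the level distribution must decay slowly enough that the estimator's variance stays finite, yet fast enough that the expected work per draw stays finite, which is why the relevant criterion is work-normalized variance.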
Cited by 7 publications (13 citation statements)
References 11 publications
“…In addition to our theoretical result, we also suggested a new stochastic optimization implementation of dropout training. We borrowed ideas from the Multi-level Monte Carlo literature, in particular from the work of Blanchet et al. (2019a), to suggest an unbiased dropout training routine that is easily parallelizable and that has a much smaller computational cost than naive dropout training methods when the number of features is large (Theorem 2). Crucially, we showed that under some regularity conditions our estimator has finite variance (which means there are also theoretical, and not just practical, gains from parallelization).…”
Section: Discussion (mentioning)
confidence: 99%
“…To address these two issues, we apply some recent techniques suggested in Blanchet et al. (2019a) that we refer to as Unbiased Multi-level Monte Carlo Approximations. Before providing a detailed presentation of the algorithm, we provide a heuristic description.…”
Section: Unbiased Multi-level Monte Carlo Approximation For Dropout T... (mentioning)
confidence: 99%
“…Among several applications, they propose [8, Section 5.2] an estimator for $\arg\min_x \mathbb{E}_{S \sim P} f(x; S)$, where $f(\cdot; s)$ is convex for all $s$, assuming access to minimizers of empirical objectives of the form $\sum_{i \in [N]} f(x; s_i)$. The authors provide a preliminary analysis of the estimator's variance (later elaborated in [9]) using an asymptotic Taylor expansion around the population minimizer. In comparison, we study the more general setting of stochastic gradient estimators and provide a complete algorithm based on SGD, along with a non-asymptotic analysis and concrete settings where our estimator is beneficial.…”
Section: Related Work (mentioning)
confidence: 99%
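As an illustration of the debiased empirical-minimizer idea referred to in the excerpt above, here is a minimal sketch that applies the same randomized-truncation device to minimizers of empirical objectives. The sampling and `empirical_argmin` interfaces, the geometric level distribution with p = 0.6, and the quadratic toy objective are assumptions made for the example; the estimator analyzed in [8, Section 5.2] rests on a more careful construction and variance analysis.

```python
import numpy as np

def unbiased_saa_minimizer(sample, empirical_argmin, p=0.6, rng=None):
    """One draw of a randomized-truncation estimator of argmin_x E[f(x; S)].

    sample(n, rng)      -> n i.i.d. draws of S (illustrative interface).
    empirical_argmin(s) -> minimizer of the empirical objective sum_i f(x; s_i).
    Mirrors the level-difference sketch above, applied to empirical minimizers
    instead of sample means.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = rng.geometric(p) - 1              # random truncation level
    prob_n = p * (1.0 - p) ** n           # P(N = n)

    if n == 0:
        delta = empirical_argmin(sample(1, rng))
    else:
        s = sample(2 ** n, rng)
        fine = empirical_argmin(s)
        # Antithetic coarse term from the two half-sample minimizers.
        coarse = 0.5 * (empirical_argmin(s[: 2 ** (n - 1)])
                        + empirical_argmin(s[2 ** (n - 1):]))
        delta = fine - coarse
    return delta / prob_n

# Toy check with f(x; s) = (x - s)^2 / 2, whose empirical minimizer is the
# sample mean, so argmin_x E[f(x; S)] = E[S] = 1 here.
rng = np.random.default_rng(1)
est = np.mean([unbiased_saa_minimizer(lambda n, r: 1.0 + r.normal(size=n),
                                      np.mean, rng=rng) for _ in range(20000)])
print(est)   # should be close to 1
```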
“…Jacob and Thiery (2015) study the existence of unbiased nonnegative estimators. RMLMC and related methods have been used in a variety of contexts such as the unbiased estimation of a function of the mean of a random variable (Blanchet, Chen and Glynn 2015, Moka, Kroese and Juneja 2019), the design of Markov chain Monte Carlo methods (Bardenet, Doucet and Holmes 2017, Agapiou, Roberts and Vollmer 2018, Middleton, Deligiannidis, Doucet and Jacob 2018), unbiased inference for hidden Markov models (Franks, Jasra, Law and Vihola 2018), pricing of Asian options under general models (Kahalé 2018), and stochastic optimization (Blanchet, Glynn and Pei 2019). Vihola (2018) describes stratified RMLMC methods that, under certain conditions, are shown to be asymptotically as efficient as MLMC.…”
Section: Introduction (mentioning)
confidence: 99%