2018
DOI: 10.1214/17-aap1326

The sample size required in importance sampling

Abstract: The goal of importance sampling is to estimate the expected value of a given function with respect to a probability measure ν using a random sample of size n drawn from a different probability measure µ. If the two measures µ and ν are nearly singular with respect to each other, which is often the case in practice, the sample size required for accurate estimation is large. In this article it is shown that in a fairly general setting, a sample of size approximately exp(D(ν||µ)) is necessary and sufficient for accurate estimation, where D(ν||µ) denotes the Kullback–Leibler divergence between the target ν and the proposal µ.
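The exp(D(ν||µ)) threshold can be seen numerically in a toy case where the divergence is available in closed form. The sketch below is a hypothetical illustration, not an example from the paper: the proposal µ is uniform on a large finite set, the target ν is uniform on a much smaller subset, so D(ν||µ) = log(M/K), and the plain importance-sampling estimate of E_ν[1] = 1 stays near zero until the sample size reaches a modest multiple of exp(D(ν||µ)).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy example (not from the paper): the proposal mu is uniform on
# {0, ..., M-1} and the target nu is uniform on the much smaller set {0, ..., K-1},
# so dnu/dmu = M/K on that set and 0 elsewhere, and D(nu||mu) = log(M/K).
M, K = 1_000_000, 100
D = np.log(M / K)                      # Kullback-Leibler divergence, here ~9.2
print(f"D(nu||mu) = {D:.2f}, exp(D) = {M // K}")

def is_estimate(n, f=lambda x: np.ones_like(x, dtype=float)):
    """Importance-sampling estimate of E_nu[f] from n proposal draws."""
    x = rng.integers(0, M, size=n)     # sample from mu
    w = np.where(x < K, M / K, 0.0)    # importance weights dnu/dmu
    return np.mean(w * f(x))

# E_nu[1] = 1 exactly; the estimate is essentially 0 when n << exp(D)
# and only stabilizes near 1 once n is a modest multiple of exp(D).
for n in [100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"n = {n:9d}:  estimate of E_nu[1] = {is_estimate(n):.3f}")
```

With these numbers exp(D(ν||µ)) = 10,000: the sample sizes below that typically return 0 because no draw lands in the target's support, the size at the threshold fluctuates between 0 and crude values, and the larger sizes settle near the true value 1.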

Cited by 115 publications (106 citation statements: 3 supporting, 103 mentioning, 0 contrasting). References 61 publications.
“…See Theorems 3.2, 3.4 and 3.5 below for the precise values of µ and σ in these cases. Together with results in [6] (see (2.2) below) we find that about N* = e^{µn+σ√n}/F_{n,1} samples are sufficient to well approximate F_{n,1}. Thus, for instance, only about 194, 1520 and 75 samples are required for F_{200,1} ≈ 4.5397 × 10^41 by the importance sampling algorithms A_r, A_f and A_g.…”
Section: Results (supporting)
confidence: 77%
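Since F_{200,1} is of order 10^41, evaluating N* = e^{µn+σ√n}/F_{n,1} directly risks floating-point overflow, so the division is better done in log space. The snippet below only illustrates that arithmetic; the values of µ and σ are made-up placeholders, not the constants from the cited Theorems 3.2, 3.4 and 3.5.

```python
import math

n = 200
mu, sigma = 0.5, 0.03        # placeholders only; the real values come from the cited theorems
log_F = math.log(4.5397e41)  # log F_{200,1}, using the value quoted above (~95.9)

# N* = exp(mu*n + sigma*sqrt(n)) / F_{n,1}, evaluated in log space to avoid overflow.
log_N_star = mu * n + sigma * math.sqrt(n) - log_F
print(f"log N* = {log_N_star:.2f},  N* ~ {math.exp(log_N_star):.1f}")
```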
“…Examples in [8] show that (2.1) and (2.2) can lead to very different estimates of the required sample size. Of course, the required N must be estimated, either by analysis (as done in [8]) or by sampling (see [6] Section 4). The concentration of log Y must also be established.…”
Section: Importance Sampling (mentioning)
confidence: 99%
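The route of estimating the required N "by sampling" and checking the concentration of log Y can be sketched in a few lines (a hypothetical one-dimensional Gaussian pair, not an example from either cited paper): draw from the target, form the log importance weights log Y = log(dν/dµ), use their sample mean to approximate D(ν||µ), and use their sample standard deviation as a crude check of how concentrated log Y is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target/proposal pair: nu = N(1, 1), mu = N(0, 2^2).
def log_nu(x):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - 1.0) ** 2

def log_mu(x):
    return -0.5 * np.log(2 * np.pi * 4.0) - (x ** 2) / 8.0

# Sample from the target and inspect log Y = log(dnu/dmu).
x = rng.normal(1.0, 1.0, size=100_000)
log_y = log_nu(x) - log_mu(x)

kl_hat = log_y.mean()      # Monte Carlo estimate of D(nu||mu) = E_nu[log Y]
spread = log_y.std()       # how concentrated log Y is around the divergence
print(f"estimated D(nu||mu) = {kl_hat:.3f}, sd of log Y = {spread:.3f}")
print(f"sample-size heuristic: exp(D) ~ {np.exp(kl_hat):.1f}, "
      f"exp(D + 2*sd) ~ {np.exp(kl_hat + 2 * spread):.1f}")
```

The last line makes the quoted point concrete: the divergence alone gives one sample-size estimate, and how spread out log Y is determines how much larger a safe sample size should be.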
“…Previous non-asymptotic analyses of importance sampling have focused on the first two. For instance, Chatterjee & Diaconis (2015) suggested -under certain concentration condition on the density- the necessity and sufficiency of the sample size being larger than the exponential of the Kullback-Leibler divergence between target and proposal, and Agapiou et al (2015) proved the sufficiency of the sample size being larger than the χ² divergence for autonormalized importance sampling for bounded test functions. Indeed the function U in the above argument is given by U(N) = log N when D is the Kullback-Leibler divergence, and U(N) = N − 1 for the χ² divergence, in agreement with Chatterjee & Diaconis (2015) and complementing Agapiou et al (2015).…”
Section: Introduction (mentioning)
confidence: 99%
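A quick way to see how different the two criteria can be is to compare them on a case where both divergences have closed forms (a hypothetical Gaussian mean-shift pair, not an example from the cited works): for ν = N(δ, 1) and µ = N(0, 1), D_KL(ν||µ) = δ²/2 while E_µ[(dν/dµ)²] = 1 + χ²(ν||µ) = e^{δ²}, so the χ²-based sample-size requirement is roughly the square of the KL-based one.

```python
import numpy as np

# Hypothetical Gaussian mean-shift pair (not from the cited papers):
# target nu = N(delta, 1), proposal mu = N(0, 1).
for delta in [1.0, 2.0, 3.0, 4.0]:
    kl = delta**2 / 2                  # D_KL(nu||mu) for this pair
    chi2_plus_one = np.exp(delta**2)   # E_mu[(dnu/dmu)^2] = 1 + chi^2(nu||mu)
    print(f"delta = {delta}:  exp(KL) ~ {np.exp(kl):12.1f}   "
          f"1 + chi^2 ~ {chi2_plus_one:16.1f}")
```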
“…See L'Ecuyer et al (2010) for an analysis of how hard it is to estimate variance in the importance sampling context. This issue is especially severe in cases with large K and small n. Chatterjee and Diaconis (2018) also remark on the difficulty of using variances in importance sampling.…”
Section: Linear Combinations (mentioning)
confidence: 99%
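One concrete way this difficulty shows up (a hypothetical illustration, not taken from either cited work): when the importance weights are heavy-tailed, the empirical second moment of the weights computed from a moderate sample can sit far below its true value, so a variance-based diagnostic looks reassuring precisely when the sampler is unreliable. The sketch below uses a Gaussian mean-shift pair where E_µ[w²] is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pair: target nu = N(delta, 1), proposal mu = N(0, 1).
delta = 4.0
true_second_moment = np.exp(delta**2)          # E_mu[w^2] = exp(delta^2) ~ 8.9e6

n = 10_000
x = rng.normal(0.0, 1.0, size=n)               # draws from the proposal
w = np.exp(delta * x - delta**2 / 2)           # importance weights dnu/dmu

print(f"true E_mu[w^2]                = {true_second_moment:.3g}")
print(f"empirical mean of w^2 (n={n}) = {np.mean(w**2):.3g}")
# With n = 10^4 the empirical value is typically orders of magnitude too small,
# because the dominant contributions to E_mu[w^2] come from x near 2*delta,
# which a sample of this size almost never contains.
```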