2017
DOI: 10.1214/17-sts611
Importance Sampling: Intrinsic Dimension and Computational Cost

Abstract: The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for the pra…
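As a concrete illustration of the idea the abstract describes (a minimal sketch, not code from the paper; the target and proposal densities below are hypothetical choices), self-normalized importance sampling reweights proposal samples by the density ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative densities (unnormalized log-densities suffice,
# since self-normalization cancels the constants):
def log_target(x):
    return -0.5 * x**2            # standard normal target

def log_proposal(x):
    return -0.5 * (x / 2.0)**2    # wider N(0, 4) proposal

N = 100_000
x = rng.normal(0.0, 2.0, size=N)          # independent samples from the proposal
logw = log_target(x) - log_proposal(x)    # unnormalized log-weights
w = np.exp(logw - logw.max())             # subtract max for numerical stability
w /= w.sum()                              # self-normalize

# Estimate E[x^2] under the target (true value is 1 for N(0, 1)).
est = np.sum(w * x**2)
```

How many samples N are needed for this estimate to be accurate is exactly the question the paper analyzes, in terms of a distance between target and proposal.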

Cited by 125 publications (182 citation statements). References 100 publications (162 reference statements).
“…The physical interpretation of (49) is that the scaling of the error is controlled by the variance of the weights Ŵ. There are two contributing factors to this variance: the randomness in the Jarzynski integration and the fact that C_i is distributed as p_c.…”
Section: Mean Square Error
Confidence: 99%
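The excerpt's point — that the variance of the weights controls the error scaling — is commonly summarized by the effective sample size. A small illustrative sketch (not from the cited work) of the Kish effective sample size:

```python
import numpy as np

def effective_sample_size(w):
    """Kish effective sample size: (sum w)^2 / sum(w^2).
    Equals N for uniform weights and collapses toward 1 as the
    weight variance grows, i.e. as a few weights dominate."""
    w = np.asarray(w, dtype=float)
    return w.sum()**2 / np.sum(w**2)

uniform = np.ones(1000)            # zero-variance weights: full sample is useful
skewed = np.ones(1000)
skewed[0] = 1000.0                 # one dominant weight: most samples wasted

ess_uniform = effective_sample_size(uniform)   # -> 1000
ess_skewed = effective_sample_size(skewed)     # collapses to roughly 4
```

Large weight variance therefore means many raw samples are needed to obtain even a handful of "effective" ones.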
“…In [14], Novak and Mathé prove that it is optimal on a certain class of tuples (f, u). However, this Monte Carlo approach has recently attracted considerable attention; let us mention here [1,4]. In particular, in [1] upper error bounds are provided not only for bounded functions f, and the relevance of the method for inverse problems is presented.…”
Section: Compute
Confidence: 99%
“…However, since this is not illustrative for the reader, we consider the random field at two fixed points in the spatial domain; these points are x^(1) = (0.5, 0.5) and x^(2) = (0.75, 0.25). Before looking at the KS distances of the distributions of θ^N_sto(x^(1)) and θ^N_sto(x^(2)), we assess their posterior mean estimates. The relative error of the posterior means at these points compared to the true values θ_true(x^(1)) and θ_true(x^(2)) is given in Figure 5.13.…”
Section: Posterior Approximation In High Dimensions
Confidence: 99%
“…Before looking at the KS distances of the distributions of θ^N_sto(x^(1)) and θ^N_sto(x^(2)), we assess their posterior mean estimates. The relative error of the posterior means at these points compared to the true values θ_true(x^(1)) and θ_true(x^(2)) is given in Figure 5.13. While the estimate of θ^N_sto(x^(1)) is quite accurate, the estimate of θ^N_sto(x^(2)) is very inaccurate, consistently in both methods.…”
Section: Posterior Approximation In High Dimensions
Confidence: 99%
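The two-sample Kolmogorov–Smirnov distance mentioned in this excerpt is the largest gap between two empirical CDFs. A self-contained sketch (illustrative only; the Gaussian samples below stand in for the posterior samples of the cited work):

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(1)
# Same distribution: KS distance near 0.
same = ks_distance(rng.normal(size=5000), rng.normal(size=5000))
# Unit mean shift between N(0,1) and N(1,1): KS distance near 0.38.
shifted = ks_distance(rng.normal(size=5000), rng.normal(1.0, 1.0, size=5000))
```

A small KS distance between an approximate posterior marginal and a reference thus indicates agreement of the whole distribution, not just of the posterior mean.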