2001
DOI: 10.1214/ss/1015346320

Optimal scaling for various Metropolis-Hastings algorithms

Abstract: We review and extend results related to optimal scaling of Metropolis-Hastings algorithms. We present various theoretical results for the high-dimensional limit. We also present simulation studies which confirm the theoretical results in finite-dimensional contexts.

Cited by 888 publications (839 citation statements)
References 9 publications

Citation statements, ordered by relevance:
“…To get some idea of how to choose κ, we ran Markov chains for different values of κ and compared their estimated first-order autocorrelations. This suggested that the optimal value of κ corresponds to an acceptance probability that is slightly above 0.40, in close agreement with Roberts & Rosenthal (2001) and Gelman et al (1996).…”
Section: Updates for ρ, σ², and λ (supporting)
confidence: 85%
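A pilot comparison of this kind is straightforward to reproduce. Below is a minimal sketch, assuming a one-dimensional random-walk Metropolis sampler with a standard normal stand-in target (the helper rw_metropolis, the target, and the grid of κ values are illustrative, not taken from the citing paper): it reports the acceptance rate and the estimated first-order autocorrelation for several proposal scales κ.

```python
import numpy as np

def rw_metropolis(log_target, x0, kappa, n_iter, rng):
    """Random-walk Metropolis with N(x, kappa^2) proposals.
    Returns the chain and the observed acceptance rate."""
    x, lp = x0, log_target(x0)
    chain, accepts = np.empty(n_iter), 0
    for i in range(n_iter):
        y = x + kappa * rng.standard_normal()
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:   # Metropolis accept/reject
            x, lp = y, lp_y
            accepts += 1
        chain[i] = x
    return chain, accepts / n_iter

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x**2              # stand-in target: standard normal
for kappa in (0.5, 1.0, 2.38, 5.0):
    chain, acc = rw_metropolis(log_target, 0.0, kappa, 50_000, rng)
    c = chain - chain.mean()
    rho1 = (c[:-1] @ c[1:]) / (c @ c)           # estimated lag-1 autocorrelation
    print(f"kappa={kappa:4.2f}  acceptance={acc:.2f}  lag-1 autocorr={rho1:.3f}")
```

For a univariate normal target, the lag-1 autocorrelation is typically smallest near κ ≈ 2.38, where the acceptance rate is roughly 0.44, consistent with the "slightly above 0.40" reported in the excerpt.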
“…In practice, σ²_{µxt} are carefully chosen such that the acceptance rates of log µxt are within the recommended range 0.15–0.45 (Roberts and Rosenthal (2001)). Following Czado et al (2005), we develop a simple automatic trial and error search algorithm for tuning σ²_{µxt}, which starts off with a crude search:…”
Section: MH step for log µxt (mentioning)
confidence: 99%
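The citing paper's search algorithm is not reproduced in the excerpt, but a crude search of the kind described could be sketched as follows, assuming short pilot runs of a univariate random-walk Metropolis step and a doubling-and-halving update of the proposal standard deviation (the stand-in target, pilot length, and update rule are illustrative assumptions; the actual algorithm in Czado et al (2005) may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * (x / 3.0) ** 2   # stand-in target: N(0, 9)

def acceptance_rate(sd, n_iter=5_000):
    """Short pilot run of random-walk Metropolis; returns the acceptance rate."""
    x, lp, acc = 0.0, log_target(0.0), 0
    for _ in range(n_iter):
        y = x + sd * rng.standard_normal()
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:
            x, lp, acc = y, lp_y, acc + 1
    return acc / n_iter

# Crude search until the acceptance rate lands in the recommended [0.15, 0.45].
sd = 1.0
for _ in range(20):
    acc = acceptance_rate(sd)
    if acc > 0.45:
        sd *= 2.0        # accepting too often: proposal too timid, enlarge
    elif acc < 0.15:
        sd *= 0.5        # accepting too rarely: proposal too bold, shrink
    else:
        break
print(f"tuned proposal sd = {sd:.2f} (acceptance {acc:.2f})")
```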
“…It turns out that the rough pattern of posterior variances of log µxt in a given year can potentially be deduced from this set of approximate optimal proposal variances, which we shall verify later. This can be attributed to the finding in Roberts and Rosenthal (2001) that the optimal proposal variance for an MH algorithm with a univariate normal distribution as its target is proportional to the posterior variance (with 2.38² as the proportionality constant).…”
Section: MH step for log µxt (mentioning)
confidence: 99%
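This proportionality is easy to check numerically. The sketch below sets the proposal variance to 2.38² times the posterior variance for a hypothetical normal posterior (the standard deviation of 4 and the run length are illustrative) and confirms that the acceptance rate lands near the optimal one-dimensional value of about 0.44:

```python
import numpy as np

rng = np.random.default_rng(2)
post_sd = 4.0                                   # hypothetical posterior sd
prop_sd = 2.38 * post_sd                        # proposal var = 2.38^2 * posterior var
log_target = lambda x: -0.5 * (x / post_sd) ** 2

x, acc, n_iter = 0.0, 0, 100_000
lp = log_target(x)
for _ in range(n_iter):
    y = x + prop_sd * rng.standard_normal()
    lp_y = log_target(y)
    if np.log(rng.uniform()) < lp_y - lp:
        x, lp, acc = y, lp_y, acc + 1
print(f"acceptance rate = {acc / n_iter:.3f}")  # close to 0.44 at this scaling
```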
“…This quantity b_n is known [23] to measure the convergence slow-down factor of a chain using the covariance estimate Σ_n obtained after n iterations, compared to a chain using the true covariance Σ. The plot clearly shows that the values of b_n are initially very large, and then get close to 1 after about 300,000 iterations.…”
Section: A 100-dimensional example (mentioning)
confidence: 99%
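The excerpt does not restate the formula for b_n. One form given in the adaptive-MCMC literature (and consistent with reference [23]) is b = d (Σᵢ λᵢ⁻²) / (Σᵢ λᵢ⁻¹)², where the λᵢ are the eigenvalues of Σ_n^{1/2} Σ^{-1/2}; it satisfies b ≥ 1, with equality exactly when Σ_n is proportional to Σ. A minimal sketch assuming that form:

```python
import numpy as np

def sym_sqrt(A):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def slowdown_factor(Sigma_n, Sigma):
    """Slow-down factor b >= 1 for a proposal shaped by the estimate Sigma_n
    instead of the true covariance Sigma; b = 1 iff Sigma_n is proportional to Sigma."""
    d = Sigma.shape[0]
    M = sym_sqrt(Sigma_n) @ np.linalg.inv(sym_sqrt(Sigma))
    lam = np.linalg.eigvals(M).real             # eigenvalues lambda_i
    return d * np.sum(lam ** -2.0) / np.sum(lam ** -1.0) ** 2

# Example: a mis-scaled spherical estimate of an anisotropic target covariance.
Sigma = np.diag([1.0, 100.0])
print(slowdown_factor(np.eye(2), Sigma))        # > 1: mismatch slows the chain
print(slowdown_factor(3.0 * Sigma, Sigma))      # = 1: proportional is optimal
```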
“…For example, it is known (see [23] and the references therein) that if the target distribution π(·) is (approximately) a high-dimensional normal distribution with covariance Σ, then the optimal Gaussian proposal distribution for an RWM algorithm is equal to N(x, 2.38² d⁻¹ Σ). Now, the target covariance Σ is generally unknown, but it can be approximated by Σ_n, the empirical covariance of the first n iterations of the Markov chain.…”
Section: A 100-dimensional example (mentioning)
confidence: 99%
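Combining the two ingredients in this excerpt, an adaptive random-walk Metropolis sampler that plugs the empirical covariance Σ_n into the 2.38² d⁻¹ Σ rule might be sketched as follows (the 2-d correlated Gaussian target, the warm-up rule, and the eps regularisation are illustrative assumptions; a production sampler would also need diminishing adaptation for ergodicity, which this sketch ignores):

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter, rng, eps=1e-6):
    """Random-walk Metropolis whose Gaussian proposal covariance is
    (2.38^2 / d) * Sigma_n, with Sigma_n the empirical covariance of the
    chain so far (eps * I keeps the proposal nonsingular)."""
    d = len(x0)
    x, lp = np.asarray(x0, float), log_target(x0)
    chain = np.empty((n_iter, d))
    for i in range(n_iter):
        if i < 2 * d:                           # warm-up: fixed spherical proposal
            prop_cov = (0.1 ** 2 / d) * np.eye(d)
        else:
            Sigma_n = np.cov(chain[:i].T)       # empirical covariance so far
            prop_cov = (2.38 ** 2 / d) * Sigma_n + eps * np.eye(d)
        y = x + rng.multivariate_normal(np.zeros(d), prop_cov)
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:
            x, lp = y, lp_y
        chain[i] = x
    return chain

# Example: a correlated 2-d Gaussian stand-in target.
rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
log_target = lambda x: -0.5 * np.asarray(x) @ Sigma_inv @ np.asarray(x)
chain = adaptive_rwm(log_target, np.zeros(2), 20_000, rng)
print(np.cov(chain[10_000:].T))                 # should approach Sigma
```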