2022
DOI: 10.1007/s11222-022-10080-8
Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics

Abstract: High-dimensional limit theorems have been shown useful to derive tuning rules for finding the optimal scaling in random walk Metropolis algorithms. The assumptions under which weak convergence results are proved are, however, restrictive: the target density is typically assumed to be of a product form. Users may thus doubt the validity of such tuning rules in practical applications. In this paper, we shed some light on optimal scaling problems from a different perspective, namely a large-sample one. This allow…
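As background to the tuning problem the abstract describes, below is a minimal sketch (not the paper's code) of a random walk Metropolis sampler using the classical optimal-scaling heuristic from the product-form limit theorems: a Gaussian proposal with standard deviation roughly 2.38/√d for a d-dimensional target, under which the acceptance rate lands near 0.234. The function name `rwm`, its defaults, and the example target are illustrative assumptions, not anything specified in the paper.

```python
# Minimal random walk Metropolis sketch with the 2.38 / sqrt(d)
# proposal-scale heuristic (an assumption for illustration; the paper
# derives tuning rules via large-sample asymptotics instead).
import numpy as np


def rwm(log_target, x0, n_iter=10_000, scale=None, rng=None):
    """Random walk Metropolis with isotropic Gaussian proposals.

    log_target : callable returning the log target density at a point.
    x0         : initial state (1-D array).
    scale      : proposal std; defaults to 2.38 / sqrt(d), the classical
                 optimal-scaling heuristic for product-form targets.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    sigma = 2.38 / np.sqrt(d) if scale is None else scale
    logp = log_target(x)
    samples = np.empty((n_iter, d))
    accepted = 0
    for i in range(n_iter):
        prop = x + sigma * rng.standard_normal(d)
        logp_prop = log_target(prop)
        # Metropolis accept/reject step on the log scale
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_iter


if __name__ == "__main__":
    # Standard normal target in 20 dimensions: the observed acceptance
    # rate should sit roughly near the 0.234 benchmark.
    d = 20
    log_target = lambda x: -0.5 * np.dot(x, x)
    samples, acc_rate = rwm(log_target, np.zeros(d), n_iter=20_000)
    print(f"acceptance rate: {acc_rate:.3f}")
```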


Cited by 6 publications (1 citation statement)
References: 27 publications
“…For MH algorithms, [3,34] relax the independence assumption, while [29] relax the identically distributed assumption. Additionally, [40] present a proof of weak convergence for MH for more general targets, and [33] provide optimal scaling results for general Bayesian targets using large-sample asymptotics. In these situations, extensions to other acceptance probabilities are similarly possible.…”
Section: Discussion
Confidence: 99%