2021
DOI: 10.1111/insr.12463

Rao–Blackwellisation in the Markov Chain Monte Carlo Era

Abstract: Rao–Blackwellisation is a notion that occurs frequently in the MCMC literature, with possibly different meanings and different connections to the original Rao–Blackwell theorem as established by C.R. Rao in 1945 and D. Blackwell in 1947, including a reduction of the variance of the resulting Monte Carlo approximations. This survey reviews some of the meanings of the term.
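For context, the variance reduction alluded to here rests on the standard conditioning argument rather than anything specific to this paper: for any square-integrable estimator δ(X) and any statistic T(X), the law of total variance gives var(δ(X)) = var(E[δ(X) | T(X)]) + E[var(δ(X) | T(X))], so the conditioned estimator E[δ(X) | T(X)] keeps the same expectation as δ(X) while having at most its variance.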

Cited by 6 publications (2 citation statements).
References 54 publications.
“…If the sufficient statistic t(X) is furthermore complete, Rao-Blackwellizing the unbiased estimator results in a uniformly minimum variance unbiased estimator (UMVUE; Lehmann & Scheffé, 1950), implying that it cannot be further improved upon under squared loss. The Rao-Blackwell theorem has important implications for the design of Markov chain Monte Carlo methods (Robert & Roberts, 2021; Kong et al., 2007), among many other applications. It provides a guideline and inspires methods to construct estimators with reduced Monte Carlo variability (e.g.…”
Section: Conditioning
confidence: 99%
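To make the conditioning mechanism concrete, here is a minimal sketch (not taken from either paper) of Rao-Blackwellisation inside a Gibbs sampler for a standard bivariate normal with correlation rho = 0.8: the naive estimate of E[X] averages the sampled x values, while the Rao-Blackwellised estimate averages the closed-form conditional means E[X | Y = y] = rho * y.

    import numpy as np

    rng = np.random.default_rng(0)
    rho = 0.8
    n_iter, burn_in = 10_000, 1_000
    x, y = 0.0, 0.0
    xs, cond_means = [], []
    for t in range(n_iter):
        # Gibbs updates for a standard bivariate normal with correlation rho
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        xs.append(x)
        cond_means.append(rho * y)   # E[X | Y = y], available in closed form here
    xs = np.array(xs[burn_in:])
    cond_means = np.array(cond_means[burn_in:])
    print("naive estimate of E[X]     :", xs.mean())
    print("Rao-Blackwellised estimate :", cond_means.mean())

Both averages are consistent for E[X] = 0; over repeated runs the conditional-mean average typically exhibits smaller Monte Carlo variance, which is the point of the conditioning.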
“…In computing the generalized Bayes estimator (the posterior mean of β), we applied the Rao-Blackwellization technique for variance reduction (Robert and Roberts, 2021). Namely, we replaced step 2 of the above algorithm with β_t = (1 − κ_{t−1}) y and took the average of the sampled β after burn-in.…”
Section: Algorithm For Posterior Sampling
confidence: 99%
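The citing paper's full algorithm is not reproduced in this excerpt, so the following is only a hypothetical sketch of the substitution it describes: a placeholder draw of the shrinkage factor kappa stands in for step 1, and step 2 records the conditional mean (1 − kappa) * y instead of a sampled β, with the post-burn-in average taken at the end. The Beta(2, 2) draw for kappa and the value of y are illustrative assumptions, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(1)
    y = 1.5                        # a single observed value, purely illustrative
    n_iter, burn_in = 5_000, 500
    betas = []
    for t in range(n_iter):
        # Step 1 (placeholder): draw the shrinkage factor kappa from its full
        # conditional; the actual distribution depends on the citing paper's
        # model and is not reproduced here.
        kappa = rng.beta(2.0, 2.0)
        # Step 2 (Rao-Blackwellised): record the conditional mean of beta given
        # kappa rather than a fresh draw of beta.
        betas.append((1.0 - kappa) * y)
    post_mean_beta = np.mean(betas[burn_in:])   # variance-reduced estimate of E[beta | y]
    print(post_mean_beta)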