1995
DOI: 10.2307/2986138
Adaptive Rejection Metropolis Sampling within Gibbs Sampling

Abstract: Gibbs sampling is a powerful technique for statistical inference. It involves little more than sampling from full conditional distributions, which can be both complex and computationally expensive to evaluate. Gilks and Wild have shown that in practice full conditionals are often log-concave, and they proposed a method of adaptive rejection sampling for efficiently sampling from univariate log-concave distributions. In this paper, to deal with non-log-concave full conditional distributions, we generalize adapti…
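The idea the abstract describes can be sketched in a few lines. The sketch below is a simplified, non-adaptive stand-in for ARMS: a fixed Gaussian pseudo-envelope `g` replaces the paper's adaptive piecewise-exponential hull, and the names `log_f`, `log_g`, and `arms_step` are illustrative, not from the paper. The key point survives the simplification: `g` need not dominate the target `f`, because a final Metropolis accept/reject step corrects for regions where the envelope falls below the density.

```python
import math
import random

def arms_step(x_cur, log_f, log_g, sample_g):
    """One ARMS-style transition (after Gilks, Best & Tan 1995), with a
    FIXED pseudo-envelope g in place of the paper's adaptive hull."""
    while True:
        x = sample_g()
        # Rejection step: accept x with probability min(1, f(x)/g(x)).
        if math.log(random.random()) < log_f(x) - log_g(x):
            break
    # Metropolis step: corrects for the envelope not dominating f.
    log_ratio = (log_f(x) + min(log_f(x_cur), log_g(x_cur))
                 - log_f(x_cur) - min(log_f(x), log_g(x)))
    if math.log(random.random()) < min(0.0, log_ratio):
        return x        # move to the proposed point
    return x_cur        # stay at the current point

# Demo: a bimodal (hence non-log-concave) target, an equal mixture of
# N(-2, 1) and N(2, 1); the pseudo-envelope is a single N(0, 3) density.
def log_f(x):
    return math.log(0.5 * math.exp(-0.5 * (x + 2) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 2) ** 2))

SD = 3.0
def log_g(x):
    return -0.5 * (x / SD) ** 2   # unnormalised N(0, SD^2) log-density

random.seed(0)
x, samples = 0.0, []
for _ in range(20000):
    x = arms_step(x, log_f, log_g, lambda: random.gauss(0.0, SD))
    samples.append(x)

mean_est = sum(samples) / len(samples)
var_est = sum((s - mean_est) ** 2 for s in samples) / len(samples)
```

Since the mixture has mean 0 and variance 5, the chain's sample moments should land close to those values. Within a Gibbs sampler, one such step would be applied to each full conditional in turn, with the adaptive hull rebuilt per coordinate.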

Cited by 552 publications (414 citation statements)
References 24 publications
“…If not, generic univariate simulation methods such as Adaptive Rejection Metropolis Sampling (Gilks et al 1995) can be employed. We now consider a couple of examples.…”
Section: Slice Sampler
Mentioning confidence: 99%
“…, N, independently draw a sample v (7) with sample size m_i by an MCMC sampler, e.g. via the Gibbs sampler with the adaptive rejection sampling algorithm (Gilks et al, 1995).…”
Section: M-step
Mentioning confidence: 99%
“…In particular, we employed the Adaptive Rejection Metropolis Sampling (ARMS) algorithm of Gilks, Best, and Tan (1995), and Gilks, Neal, Best, and Tan (1997). In this case, we use independent beta priors on ψ_1 (= α + β) and ψ_2 (= β/(α + β)), with mean 3/4 and standard deviation .1443, which are centred around the typical values estimated by previous studies with monthly return data.…”
Section: Simulated Bayesian Inference
Mentioning confidence: 99%