2006
DOI: 10.1007/s11222-006-5536-2

On directional Metropolis–Hastings algorithms

Abstract: New Metropolis–Hastings algorithms using directional updates are introduced in this paper. Each iteration of a directional Metropolis–Hastings algorithm consists of three steps: (i) generate a line by sampling an auxiliary variable, (ii) propose a new state along the line, and (iii) accept/reject according to the Metropolis–Hastings acceptance probability. We consider two classes of directional updates. The first uses a point in R^n as auxiliary variable; the second uses an auxiliary direction vector. The proposed a…
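The three steps in the abstract can be sketched in code. The sketch below assumes the auxiliary-direction variant: draw a uniform direction, make a symmetric proposal along the resulting line, and accept/reject with the Metropolis–Hastings probability. The target density (a standard 2-D Gaussian) and the step size are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    """Log-density of the illustrative target: a standard 2-D Gaussian."""
    return -0.5 * np.dot(x, x)

def directional_mh_step(x, step=1.0):
    # (i) generate a line: sample an auxiliary direction uniformly
    # on the unit sphere (normalised Gaussian draw)
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    # (ii) propose a new state along the line through x with direction u;
    # a symmetric draw for the signed distance keeps the proposal symmetric
    t = step * rng.standard_normal()
    x_prop = x + t * u
    # (iii) accept/reject with the Metropolis-Hastings probability;
    # the proposal is symmetric, so the ratio reduces to pi(x_prop)/pi(x)
    if np.log(rng.random()) < log_pi(x_prop) - log_pi(x):
        return x_prop
    return x

# Run a short chain; the sample mean and spread should match the target.
x = np.zeros(2)
samples = []
for _ in range(20000):
    x = directional_mh_step(x)
    samples.append(x)
samples = np.asarray(samples)
```

Because the direction is drawn afresh at every iteration, this reduces to a random-walk Metropolis move along a random line; the paper's contribution lies in more general choices of the auxiliary variable and proposal along the line.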

Cited by 11 publications (8 citation statements)
References 36 publications
“…Gilks et al (1998), Liu et al (2000) and Craiu and Lemieux (2007), Green and Mira (2001), Eidsvik and Tjelmeland (2006). In this paper, we refer to a more global version of adaptation which is based on learning the geography of π "on the fly" from all the samples available up to the current time t. Such an approach violates the Markovian property as the subsequent realizations of the chain depend not only on the current state but also on all past realizations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Adaptive methods include those that use the chain to modify the proposal distribution [17,18] and adaptive direction samplers [7,11] that maintain multiple points in parameter space. While adaptive methods speed convergence by more efficiently sampling the parameter space, other methods accelerate the posterior evaluations required to compute the Hastings ratio at each step.…”
Section: MCMC (mentioning)
confidence: 99%
“…Instead, we compute approximations to the moments by generating samples from the posterior distribution and calculating the discrete analogues. Samples may be generated from an implicitly defined posterior by Markov chain Monte Carlo (MCMC) methods [2,7,10,11,17,18,19,28,30] whereby a Markov chain is established whose stationary distribution is the posterior.…”
(mentioning)
confidence: 99%
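The statement above describes approximating posterior moments by their discrete analogues computed from MCMC draws. A minimal sketch, assuming a Beta(3, 5) "posterior" sampled with a plain random-walk Metropolis chain (an illustrative setup, not one from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Unnormalised log-density of Beta(3, 5) on (0, 1)
    if not (0.0 < theta < 1.0):
        return -np.inf
    return 2.0 * np.log(theta) + 4.0 * np.log(1.0 - theta)

# Random-walk Metropolis chain targeting the posterior
theta, draws = 0.5, []
for _ in range(50000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)
draws = np.asarray(draws[5000:])   # discard burn-in

# Discrete analogues of the posterior moments
mean_hat = draws.mean()            # approximates E[theta] = 3/8
var_hat = draws.var()              # approximates Var[theta] = 15/576
```

The sample average replaces the integral defining each moment; by ergodicity of the chain, these estimates converge to the true posterior moments as the number of draws grows.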
“…The implementation of MCMC does not directly find the maximum likelihoods and can avoid monotone convergence. Metropolis–Hastings algorithms are commonly adopted in MCMC by constructing an ergodic Markov chain using a rejection mechanism (Eidsvik and Tjelmeland, 2006; Strid, 2010). …”
Section: A Framework For Comprehensive Uncertainty Analysis (mentioning)
confidence: 99%