2017
DOI: 10.48550/arxiv.1710.09759
Preprint

Directional Metropolis-Hastings

Abstract: We propose a new kernel for Metropolis-Hastings, called Directional Metropolis-Hastings (DMH), with a multivariate update in which the proposal kernel has a state-dependent covariance matrix. We use the derivative of the target distribution at the current state to change the orientation of the proposal distribution, thereby producing a more plausible proposal. We study the conditions for geometric ergodicity of our algorithm and provide necessary and sufficient conditions for convergence. We also suggest a scheme for…
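To make the construction in the abstract concrete, the following is a minimal sketch of one directional Metropolis-Hastings step, assuming a Gaussian proposal whose covariance is stretched along the gradient of the log target at the current state. The specific covariance form (a rank-one stretch along the normalized gradient), the function name directional_mh_step, and the tuning parameters sigma_along and sigma_across are illustrative assumptions, not the authors' exact kernel.

```python
# Illustrative sketch of a directional MH step with a state-dependent,
# gradient-oriented Gaussian proposal (not the paper's exact construction).
import numpy as np

def directional_mh_step(x, log_pi, grad_log_pi,
                        sigma_along=1.0, sigma_across=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]

    def cov(z):
        g = grad_log_pi(z)
        norm = np.linalg.norm(g)
        if norm < 1e-12:                      # fall back to an isotropic proposal
            return sigma_across**2 * np.eye(d)
        u = g / norm                          # unit vector along the gradient
        P = np.outer(u, u)                    # projector onto the gradient direction
        return sigma_along**2 * P + sigma_across**2 * (np.eye(d) - P)

    def log_q(z_to, z_from):                  # log density of N(z_from, cov(z_from)) at z_to
        S = cov(z_from)
        diff = z_to - z_from
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (diff @ np.linalg.solve(S, diff) + logdet + d * np.log(2 * np.pi))

    y = rng.multivariate_normal(x, cov(x))    # propose from the state-dependent Gaussian
    # Hastings correction is needed because cov(x) != cov(y) in general.
    log_alpha = (log_pi(y) - log_pi(x)) + (log_q(x, y) - log_q(y, x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```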

Cited by 2 publications (4 citation statements)
References 4 publications
“…2 were omitted, the Hop algorithm would be a special case of the Directional Metropolis-Hastings algorithm of Mallik and Jones (2017), which also allows for a MALA-like offset of the proposal mean. However, unlike the algorithm in Mallik and Jones (2017), the Hop algorithm is specifically intended for jumping between contours. As we shall see in Theorem 2 below, which is proved in Appendix C, and the simulations in Section 3, the position-dependent scaling brings enormous (and, perhaps, unexpected) gains in efficiency for typical targets.…”
Section: Hop (mentioning, confidence: 99%)
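The quoted passage notes that the DMH kernel also allows a MALA-like offset of the proposal mean. Below is a minimal sketch of what such an offset could look like, assuming a preconditioned Langevin drift; the function name drifted_mean and the step parameter are illustrative, not taken from either cited paper.

```python
import numpy as np

def drifted_mean(x, grad_log_pi, cov, step=0.5):
    """MALA-like offset of the proposal mean: x + (step/2) * Sigma(x) * grad log pi(x)."""
    return x + 0.5 * step * cov(x) @ grad_log_pi(x)

# Example with a standard-normal target, where grad log pi(x) = -x and Sigma(x) = I:
x = np.array([1.0, -2.0])
mean = drifted_mean(x, lambda z: -z, lambda z: np.eye(z.size), step=0.5)
# With a drifted mean, the Hastings ratio must use the forward mean drifted_mean(x, ...)
# and the reverse mean drifted_mean(y, ...) in the respective proposal densities.
```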
“…This is in line with the intuition that the jump size should be smaller in directions orthogonal to that containing the signal. In Livingstone (2015) (see also Mallik and Jones (2017) for a similar algorithm and an adaptive version of it), the author studies the efficiency (in a non-asymptotic regime) of RWMH using a position-dependent covariance matrix (an approach which actually gathers a number of the aforementioned methods under a generic framework). In particular, it is established that for sparse and filamentary distributions and under regularity assumptions, the convergence of the position-dependent RWMH occurs at a geometric rate, something which does not always hold for the standard RWMH.…”
Section: Introduction (mentioning, confidence: 99%)
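For concreteness, a random walk Metropolis-Hastings move with a position-dependent Gaussian proposal, as discussed in the quoted passage, accepts with the standard Hastings probability (a generic statement of the correction, not a result specific to either cited paper):

\[
\alpha(x, y) \;=\; \min\!\left\{ 1,\;
  \frac{\pi(y)\, \mathcal{N}\!\left(x \mid y, \Sigma(y)\right)}
       {\pi(x)\, \mathcal{N}\!\left(y \mid x, \Sigma(x)\right)} \right\},
\qquad y \sim \mathcal{N}\!\left(x, \Sigma(x)\right).
\]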