Published: 2019
DOI: 10.3150/17-bej1007

Bayesian consistency for a nonparametric stationary Markov model

Abstract: We consider posterior consistency for a Markov model with a novel class of nonparametric priors. In this model, the transition density is parameterized via a mixing distribution function, so the Wasserstein distance between mixing measures can be used to construct neighborhoods of a transition density. The Wasserstein distance is sufficiently strong: for example, if the mixing distributions are compactly supported, it dominates the sup-L1 metric. We provide sufficient conditions for posterior consistency with r…
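The dominance claim in the abstract can be illustrated numerically in a simplified setting. The sketch below is not the paper's construction: it assumes a one-dimensional Gaussian location-mixture kernel and two compactly supported discrete mixing measures (all numerical choices are hypothetical), and compares the Wasserstein-1 distance between the mixing measures with the L1 distance between the induced mixture densities.

```python
# Minimal illustrative sketch (assumptions: Gaussian location-mixture kernel,
# discrete mixing measures on [0, 1]); not the paper's transition-density setup.
import numpy as np
from scipy.stats import norm, wasserstein_distance

# Two discrete mixing measures supported on [0, 1] (hypothetical example values).
atoms_p = np.array([0.1, 0.4, 0.8])
weights_p = np.array([0.2, 0.5, 0.3])
atoms_q = np.array([0.15, 0.45, 0.75])
weights_q = np.array([0.25, 0.45, 0.3])

# Wasserstein-1 distance between the mixing measures (SciPy's 1-d implementation).
w1 = wasserstein_distance(atoms_p, atoms_q, weights_p, weights_q)

# Induced mixture densities f_P(y) = sum_j w_j * N(y; mu_j, sigma^2).
sigma = 0.5
y = np.linspace(-4.0, 5.0, 4001)
f_p = sum(w * norm.pdf(y, loc=m, scale=sigma) for m, w in zip(atoms_p, weights_p))
f_q = sum(w * norm.pdf(y, loc=m, scale=sigma) for m, w in zip(atoms_q, weights_q))

# L1 distance between the mixture densities, approximated on the grid.
l1 = np.trapz(np.abs(f_p - f_q), y)

print(f"W1 between mixing measures  : {w1:.4f}")
print(f"L1 between mixture densities: {l1:.4f}")
# For a smooth kernel and compactly supported mixing measures, the L1 (and
# total variation) distance between the mixtures is bounded by a constant
# times the W1 distance between the mixing measures, which is why
# Wasserstein neighborhoods of the mixing measure induce a stronger topology.
```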

Cited by 6 publications (4 citation statements). References 31 publications.

Citation statements:
“…Müller et al. (1997) use normalized weights that employ a finite MoE framework to accommodate nonlinearity. Posterior consistency for BNP transition density estimation has been explored by Tang and Ghosal (2007a), Tang and Ghosal (2007b), and Chae and Walker (2019). Many of the above methods assume first-order time dependence.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Recently, Bernton et al. (2019) proposed the use of the Wasserstein distance in the implementation of Approximate Bayesian Computation (ABC) to approximate the posterior distribution. In nonparametric Bayesian inference, Chae and Walker (2019a) and Nguyen (2013) used Wasserstein metrics to study asymptotic properties of posterior distributions, but W_p was considered as a distance between mixing distributions rather than a distance between mixture densities themselves. As a result, the Wasserstein metrics in these papers yielded a stronger topology than the total variation distance on the space of density functions.…”
Section: Introduction (mentioning)
Confidence: 99%