2018
DOI: 10.1214/18-EJS1506
Convergence analysis of the block Gibbs sampler for Bayesian probit linear mixed models with improper priors

Abstract: In this article, we consider Markov chain Monte Carlo (MCMC) algorithms for exploring the intractable posterior density associated with Bayesian probit linear mixed models under improper priors on the regression coefficients and variance components. In particular, we construct a two-block Gibbs sampler using data augmentation (DA) techniques. Furthermore, we prove geometric ergodicity of the Gibbs sampler, which is the foundation for building central limit theorems for MCMC-based estimators and subsequent …
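The two-block DA construction can be made concrete with a small sketch. The following Python code is illustrative only, not the paper's exact algorithm: it assumes an Albert and Chib style latent-normal augmentation, a flat prior on the regression coefficients β, a single random-effect block u with precision τ, and (unlike the paper's truncated improper prior) a proper Gamma(a, b) prior on τ so the τ-draw is a plain gamma; all names and hyperparameters are illustrative.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated data from a probit mixed model: y_i = 1{x_i'beta + z_i'u + e_i > 0}.
n, p, q = 200, 3, 5
X = rng.normal(size=(n, p))                    # fixed-effect design
Z = rng.normal(size=(n, q))                    # random-effect design (illustrative)
y = (X @ np.array([0.5, -1.0, 0.25])
     + Z @ rng.normal(scale=0.5, size=q)
     + rng.normal(size=n) > 0).astype(int)

def two_block_gibbs(y, X, Z, iters=2000, a=0.5, b=0.5):
    """Sketch of a two-block DA Gibbs sampler for the probit mixed model.
    Block 1 draws the latent normals w and the precision tau; block 2
    draws (beta, u) jointly from a multivariate normal."""
    n, p = X.shape
    q = Z.shape[1]
    W = np.hstack([X, Z])
    theta = np.zeros(p + q)                    # theta = (beta, u)
    draws = np.empty((iters, p + q + 1))
    for t in range(iters):
        # Block 1a: w_i | theta ~ N(m_i, 1) truncated to (0, inf) if y_i = 1
        # and to (-inf, 0) if y_i = 0 (Albert-Chib augmentation).
        m = W @ theta
        lo = np.where(y == 1, -m, -np.inf)
        hi = np.where(y == 1, np.inf, -m)
        w = m + truncnorm.rvs(lo, hi, random_state=rng)
        # Block 1b: tau | u ~ Gamma(a + q/2, rate = b + ||u||^2 / 2).
        u = theta[p:]
        tau = rng.gamma(a + q / 2, 1.0 / (b + 0.5 * u @ u))
        # Block 2: (beta, u) | w, tau ~ N(Q^{-1} W'w, Q^{-1}) under a flat
        # prior on beta, where Q = W'W + diag(0, ..., 0, tau, ..., tau).
        Q = W.T @ W + np.diag(np.r_[np.zeros(p), np.full(q, tau)])
        L = np.linalg.cholesky(Q)
        mean = np.linalg.solve(Q, W.T @ w)
        theta = mean + np.linalg.solve(L.T, rng.normal(size=p + q))
        draws[t] = np.r_[theta, tau]
    return draws

draws = two_block_gibbs(y, X, Z)
print("posterior mean of beta:", draws[500:, :3].mean(axis=0))

Blocking (w, τ) together and (β, u) together mirrors the two-block structure described in the abstract; drawing β and u jointly avoids the slow mixing that componentwise updates can exhibit.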

Cited by 14 publications (26 citation statements)
References 31 publications
“…Availability of a Markov chain CLT has been demonstrated for myriad MCMC algorithms for common statistical models. Here we provide an incomplete list: linear models (Román and Hobert 2012, 2015), generalized linear models including the probit model (Roy and Hobert 2007, Chakraborty and Khare 2017), the popular logistic model (Choi and Hobert 2013, Wang and Roy 2018c) and the robit model (Roy 2012), generalized linear mixed models including the probit mixed model (Wang and Roy 2018b) and the logistic mixed model (Wang and Roy 2018a), quantile regression models (Khare and Hobert 2012), multivariate regression models (Roy and Hobert 2010, Hobert et al. 2018), and penalized regression and variable selection models (Khare and Hobert 2013, Roy and Chakraborty 2017, Vats 2017). So far we have described honest MCMC in the context of estimating means of univariate functions.…”
Section: Honest MCMC (mentioning)
confidence: 99%
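The "honest MCMC" point, that a Markov chain CLT licenses error assessment for ergodic averages, is easy to illustrate. Below is a minimal batch-means sketch in Python; this is a standard Monte Carlo standard error estimator, not code from any of the cited papers, and the batch count is an arbitrary choice.

import numpy as np

def batch_means_se(x, n_batches=30):
    """Monte Carlo standard error of mean(x) via non-overlapping batch
    means; valid for chains satisfying a Markov chain CLT."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // n_batches) * n_batches      # drop the ragged tail
    batch_means = x[:n].reshape(n_batches, -1).mean(axis=1)
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

# Usage: a CLT-based 95% interval for the posterior mean of g(theta)
# is mean(x) +/- 1.96 * batch_means_se(x), where x = g(chain draws).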
“…Remark 1. As in Wang and Roy (2017), we assume that the prior distribution for τ_j is a truncated Gamma distribution. However, when implementing the block Gibbs sampler in practice, a number slightly larger than machine-precision zero can be used as the truncation point τ_0, which essentially removes the need for rejection sampling to draw from the truncated conditional distribution of τ.…”
Section: Two-block Gibbs Sampler (mentioning)
confidence: 99%
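The computational trick in this remark can be sketched directly. In the hypothetical helper below, the truncation point tau0 defaults to the smallest positive double, so a plain gamma draw essentially always lands above it and the rejection loop almost never repeats; the Gamma(shape, rate) parametrization and all names are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def draw_truncated_gamma(shape, rate, tau0=np.finfo(float).tiny):
    """Draw tau ~ Gamma(shape, rate) restricted to (tau0, inf) by
    rejection. With tau0 just above machine-precision zero, the
    constraint is essentially never violated, so this reduces to a
    single plain gamma draw in practice."""
    while True:
        tau = rng.gamma(shape, 1.0 / rate)     # numpy uses scale = 1/rate
        if tau > tau0:
            return tau

print(draw_truncated_gamma(2.0, 3.0))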
“…Roy and Hobert (2007) and Chakraborty and Khare (2017) proved the geometric ergodicity of this DA algorithm for the Bayesian probit regression model under improper and proper priors, respectively. Wang and Roy (2017) recently extended the convergence-rate analysis of the block Gibbs samplers based on this DA technique to Bayesian probit linear mixed models under both proper and improper priors.…”
Section: Introduction (mentioning)
confidence: 99%
“…Since we consider both proper and improper priors on (β, τ), if improper priors are used, then c(y) is not necessarily finite. Posterior propriety for nonlinear mixed models with general link functions is studied in Chen, Shao, and Xu (2002) and Wang and Roy (2018b).…”
Section: Introduction (mentioning)
confidence: 99%
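For orientation, c(y) here is the normalizing constant of the posterior. One plausible rendering in LaTeX; the notation, with p regression coefficients and r variance components, is assumed rather than taken from the paper.

% c(y): the posterior normalizing constant under the prior pi(beta, tau).
\[
  c(y) = \int_{\mathbb{R}_{+}^{r}} \int_{\mathbb{R}^{p}}
         L(\beta, \tau \mid y)\, \pi(\beta, \tau)\, d\beta \, d\tau .
\]
% The posterior is proper if and only if c(y) < \infty; with improper
% priors this integral can diverge, so propriety must be verified.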
“…Wang and Roy (2018a) consider convergence analysis of a Gibbs sampler for LLMMs under a truncated gamma prior on τ and a proper normal prior on β. GE of Gibbs samplers for probit and logistic GLMs under different priors has been established in the literature (Chakraborty and Khare, 2017; Choi and Hobert, 2013; Roy and Hobert, 2007; Wang and Roy, 2018c). Also, GE of Gibbs samplers for the probit mixed model and normal linear mixed models under improper priors on the regression coefficients and variance components is considered in Wang and Roy (2018b) and Román and Hobert (2012), …”
Section: Introduction (mentioning)
confidence: 99%