When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals. Combining this robust regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX-DA algorithm that can be used to explore this posterior. This paper provides conditions (on the mixing density) for geometric ergodicity of the Markov chains underlying these Markov chain Monte Carlo algorithms. Letting d denote the dimension of the response, the main result shows that the DA and Haar PX-DA Markov chains are geometrically ergodic whenever the mixing density is generalized inverse Gaussian, log-normal, inverted Gamma (with shape parameter larger than d/2) or Fréchet (with shape parameter larger than d/2). The results also apply to certain subsets of the Gamma, F and Weibull families. We assume throughout the paper that (N1) and (N2) hold. Under these two conditions, the Markov chain of interest is well-defined, and we can engage in a convergence rate analysis.
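To make the DA algorithm described above concrete, here is a minimal sketch for the simplest special case: a univariate response with Student-t errors (the inverted-Gamma mixing density) and a flat prior on the regression coefficients. The function name and interface are hypothetical; the abstract's actual setting is multivariate and covers several other mixing families.

```python
import numpy as np

def da_robust_regression(y, X, nu=5.0, n_iter=2000, rng=None):
    """Illustrative DA (Gibbs) sampler for y = X @ beta + e with iid
    Student-t(nu) errors, i.e. an inverted-Gamma scale mixture of
    normals, under a flat prior on beta.  Hypothetical sketch only."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start at the OLS fit
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # I-step: latent weights w_i | beta ~ Gamma((nu+1)/2, rate=(nu+r_i^2)/2)
        r = y - X @ beta
        w = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + r**2))
        # P-step: beta | w ~ N((X'WX)^{-1} X'Wy, (X'WX)^{-1})
        XtW = X.T * w
        prec = XtW @ X
        mean = np.linalg.solve(prec, XtW @ y)
        # Sample from N(mean, prec^{-1}) via a Cholesky factor of prec
        L = np.linalg.cholesky(prec)
        beta = mean + np.linalg.solve(L.T, rng.standard_normal(p))
        draws[t] = beta
    return draws
```

The two conditional draws alternate exactly as in a standard DA scheme: the latent scales are refreshed from their full conditional, then the coefficients are drawn from a weighted-least-squares normal. The Haar PX-DA variant would add an extra low-dimensional move on the latent scales between these steps.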
We study MCMC algorithms for Bayesian analysis of a linear regression model with generalized hyperbolic errors. The Markov operators associated with the standard data augmentation algorithm and a sandwich variant of that algorithm are shown to be trace-class.
Gibbs sampling is a widely popular Markov chain Monte Carlo algorithm which is often used to analyze intractable posterior distributions associated with Bayesian hierarchical models. The goal of this article is to introduce an alternative to Gibbs sampling that is particularly well suited for Bayesian models which contain latent or missing data. The basic idea of this hybrid algorithm is to update the latent data from its full conditional distribution at every iteration, and then use a random scan to update the parameters of interest. The hybrid algorithm is often easier to analyze from a theoretical standpoint than the deterministic or random scan Gibbs sampler. We highlight a positive result in this direction from Abrahamsen and Hobert (2018), who proved geometric ergodicity of the hybrid algorithm for a Bayesian version of the general linear mixed model with a continuous shrinkage prior. The convergence rate of the Gibbs sampler for this model remains unknown. In addition, we provide new geometric ergodicity results for the hybrid algorithm and the Gibbs sampler for two classes of Bayesian linear regression models with non-Gaussian errors. In both cases, the conditions under which the hybrid algorithm is geometric are much weaker than the corresponding conditions for the Gibbs sampler. Finally, we show that the hybrid algorithm is amenable to a modified version of the sandwich methodology of Hobert and Marchev (2008), which can be used to speed up the convergence rate of the underlying Markov chain while requiring roughly the same computational effort per iteration.
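The hybrid update described in this abstract can be sketched as a generic skeleton: every iteration refreshes the latent data from its full conditional, then a random scan updates exactly one parameter block. Everything here (the function name, the callback interface) is an assumed illustration, not the authors' implementation.

```python
import numpy as np

def hybrid_scan(draw_latent, draw_param, theta_blocks, n_iter,
                probs=None, rng=None):
    """Skeleton of the hybrid sampler: a deterministic latent-data
    update followed by a random-scan update of one parameter block.
    draw_latent(blocks, rng) samples z from its full conditional;
    draw_param[j](z, blocks, rng) samples block j given z and the
    other blocks.  Hypothetical interface for illustration."""
    rng = np.random.default_rng(rng)
    k = len(theta_blocks)
    probs = np.full(k, 1.0 / k) if probs is None else np.asarray(probs)
    history = []
    for _ in range(n_iter):
        # Step 1: latent data z is updated at EVERY iteration
        z = draw_latent(theta_blocks, rng)
        # Step 2: random scan -- pick one parameter block and update it
        j = rng.choice(k, p=probs)
        theta_blocks[j] = draw_param[j](z, theta_blocks, rng)
        history.append([np.copy(b) for b in theta_blocks])
    return history
```

In contrast, a deterministic-scan Gibbs sampler would cycle through all k blocks each iteration; updating only one randomly chosen block is what makes the hybrid chain's Markov operator easier to analyze, at roughly the per-iteration cost of a single-block update plus the latent draw.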