The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, n, and/or number of covariates, p. This article provides a thorough convergence complexity analysis of Albert and Chib's (1993) data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as n → ∞ (with p fixed), and as p → ∞ (with n fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.
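For readers unfamiliar with the technique, drift and minorization conditions of the following standard form are what such analyses rest on; the drift function V, the constants λ, b, ε, and the small set C below are generic placeholders, not the specific objects constructed in the article.

A drift (Lyapunov) condition asks for a function V ≥ 0, a λ ∈ (0, 1), and a b < ∞ such that
\[
  \mathbb{E}\bigl[V(X_{k+1}) \mid X_k = x\bigr] \;\le\; \lambda\, V(x) + b
  \qquad \text{for every state } x,
\]
while an associated minorization condition asks for an ε > 0 and a probability measure Q such that
\[
  P(x, \cdot) \;\ge\; \varepsilon\, Q(\cdot)
  \qquad \text{for every } x \in C := \{x : V(x) \le d\},
\]
for some d > 2b/(1 − λ). Conditions of this type yield computable upper bounds on the geometric convergence rate and on the total variation distance to stationarity (e.g., via Rosenthal's (1995) theorem), which is how statements about the rate can be made quantitative as n or p grows.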
The utility of a Markov chain Monte Carlo algorithm is, in large part, determined by the size of the spectral gap of the corresponding Markov operator. However, calculating (and even approximating) the spectral gaps of practical Monte Carlo Markov chains in statistics has proven to be an extremely difficult and often insurmountable task, especially when these chains move on continuous state spaces. In this paper, a method for accurate estimation of the spectral gap is developed for general state space Markov chains whose operators are non-negative and trace-class. The method is based on the fact that the second largest eigenvalue (and hence the spectral gap) of such operators can be bounded above and below by simple functions of the power sums of the eigenvalues. These power sums often have nice integral representations. A classical Monte Carlo method is proposed to estimate these integrals, and a simple sufficient condition for finite variance is provided. This leads to asymptotically valid confidence intervals for the second largest eigenvalue (and the spectral gap) of the Markov operator. In contrast with previously existing techniques, our method is not based on a near-stationary version of the Markov chain, which, paradoxically, cannot be obtained in a principled manner without bounds on the spectral gap. On the other hand, it can be quite expensive from a computational standpoint. The efficiency of the method is studied both theoretically and empirically.

Suppose we want to approximate the integral J = ∫ f(x) π(x) dx using the standard estimator Ĵ_m = m⁻¹ ∑_{k=0}^{m−1} f(Φ_k), where Φ_0, Φ_1, …, Φ_{m−1} are the first m elements of a well-behaved Markov chain with stationary density π(·). Unlike classical Monte Carlo estimators, Ĵ_m is not based on iid random elements. Indeed, the elements of the chain are typically neither identically distributed nor independent. Given var_π f, the variance of f(·) under the stationary distribution, the accuracy of Ĵ_m is primarily determined by two factors: (i) the convergence rate of the Markov chain, and (ii) the correlation between the f(Φ_k)'s when the chain is stationary. These two factors are related, and can be analyzed jointly under an operator theoretic framework.

The starting point of the operator theoretic approach is the Hilbert space of functions that are square integrable with respect to the target pdf, π(·). The Markov transition function that gives rise to Φ = {Φ_k}_{k=0}^∞ defines a linear (Markov) operator on this Hilbert space. (Formal definitions are given in Section 2.) If Φ is reversible, then it is geometrically ergodic if and only if the corresponding Markov operator admits a positive spectral gap (Roberts and Rosenthal, 1997; Kontoyiannis and Meyn, 2012). The gap, which is a real number in (0, 1], plays a fundamental role in determining the mixing properties of the Markov chain, with larger values corresponding to better performance. For instance, suppose Φ_0 has pdf π_0(·) such that dπ_0/dπ is in the Hilbert space, and let d(Φ_k; π) denote the total variation distance between the distribution of Φ_k and the chain's stationary distribution. Then, if δ denotes the spectral gap, d(Φ_k; π) converges to zero geometrically fast, at a rate governed by 1 − δ.
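To make the power-sum idea concrete, here is one elementary pair of bounds of the kind alluded to above, stated for a generic non-negative trace-class operator T; it is not necessarily the exact inequality used in the paper. Write the eigenvalues as 1 = λ_0 > λ_1 ≥ λ_2 ≥ … ≥ 0 and let s_k = ∑_{i≥1} λ_i^k denote the k-th power sum with the trivial eigenvalue removed, so that s_k = Tr(T^k) − 1. Since λ_1^k ≤ s_k and s_{k+1} ≤ λ_1 s_k,
\[
  \frac{s_{k+1}}{s_k} \;\le\; \lambda_1 \;\le\; s_k^{1/k},
  \qquad k = 1, 2, \dots,
\]
and both bounds tighten to λ_1 as k grows (when λ_1 > 0). Estimating the traces Tr(T^k), which for trace-class integral operators admit integral representations, by classical Monte Carlo therefore yields interval estimates of λ_1, and hence of the spectral gap 1 − λ_1.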
When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals. Combining this robust regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX-DA algorithm that can be used to explore this posterior. This paper provides conditions (on the mixing density) for geometric ergodicity of the Markov chains underlying these Markov chain Monte Carlo algorithms. Letting d denote the dimension of the response, the main result shows that the DA and Haar PX-DA Markov chains are geometrically ergodic whenever the mixing density is generalized inverse Gaussian, log-normal, inverted Gamma (with shape parameter larger than d/2) or Fréchet (with shape parameter larger than d/2). The results also apply to certain subsets of the Gamma, F and Weibull families.

We assume throughout the paper that (N1) and (N2) hold. Under these two conditions, the Markov chain of interest is well-defined, and we can engage in a convergence rate analysis.
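As a point of reference, the scale mixture error model referred to above has the following standard latent-variable form; the mixing density h, the latent scales u_i, and the scale matrix Σ are generic notation, not necessarily that of the paper. Each error vector has density
\[
  f_{\varepsilon}(\varepsilon)
  \;=\;
  \int_0^\infty \frac{u^{d/2}}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}
  \exp\!\Bigl(-\tfrac{u}{2}\,\varepsilon^{\top}\Sigma^{-1}\varepsilon\Bigr)\,
  h(u)\,\mathrm{d}u ,
\]
or, equivalently, ε_i | u_i ∼ N_d(0, u_i^{-1} Σ) with u_i ∼ h independently. A DA algorithm exploits this representation by alternating between draws of the latent scales u_1, …, u_n given the data and parameters, and draws of the regression parameters given the u_i's; the geometric ergodicity conditions in the main result are restrictions on h.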