In this article, we consider Markov chain Monte Carlo (MCMC) algorithms for exploring the intractable posterior density associated with Bayesian probit linear mixed models under improper priors on the regression coefficients and variance components. In particular, we construct a two-block Gibbs sampler using data augmentation (DA) techniques. Furthermore, we prove geometric ergodicity of the Gibbs sampler, which is the foundation for establishing central limit theorems for MCMC-based estimators and subsequent inferences. The conditions for geometric convergence are similar to those guaranteeing posterior propriety. We also provide conditions for the propriety of posterior distributions with a general link function when the design matrices take commonly observed forms. In general, the Haar parameter-expanded DA (PX-DA) algorithm improves on the DA algorithm and has been shown to be theoretically at least as good. Here we construct a Haar PX-DA algorithm, which has essentially the same computational cost as the two-block Gibbs sampler.
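The two-block DA structure described above alternates between latent variables and regression coefficients. As an illustration only (not the authors' mixed-model sampler), the sketch below implements the classical Albert–Chib-style DA Gibbs sampler for a fixed-effects probit model with a flat prior on the coefficients: latent normals are drawn truncated according to the observed binary responses, then the coefficients are drawn from their Gaussian full conditional. The function name `probit_da_gibbs` and the flat-prior, fixed-effects simplification are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=1000, seed=0):
    """Two-block DA Gibbs sampler for probit regression with a flat prior
    on beta (a simplified, Albert-Chib-style sketch, not the full mixed
    model with variance components).

    Model: z_i ~ N(x_i' beta, 1), y_i = 1{z_i > 0}.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    L = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # Block 1: z_i | beta, y_i is N(mu_i, 1) truncated to (0, inf)
        # when y_i = 1 and to (-inf, 0) when y_i = 0 (standardized bounds).
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # Block 2: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior.
        beta = XtX_inv @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

Each iteration costs one truncated-normal draw per observation plus one multivariate-normal draw, which is the "essentially the same computational cost" baseline the Haar PX-DA variant is compared against.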
The logistic regression model is the most popular model for analyzing binary data. In the absence of any prior information, an improper flat prior is often used for the regression coefficients in Bayesian logistic regression models. The resulting intractable posterior density can be explored by running Polson et al.'s (2013) data augmentation (DA) algorithm. In this paper, we establish that the Markov chain underlying Polson et al.'s (2013) DA algorithm is geometrically ergodic. Proving this theoretical result is practically important as it ensures the existence of central limit theorems (CLTs) for sample averages under a finite second moment condition. The CLT in turn allows users of the DA algorithm to calculate standard errors for posterior estimates.
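The practical payoff of a CLT is exactly the standard-error calculation mentioned above. One standard way to estimate the Monte Carlo standard error of an ergodic average is the batch-means method, sketched below (this is a generic illustration of the technique, not code from the paper):

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Batch-means estimate of the Monte Carlo standard error of the
    sample mean of a chain for which a CLT holds (e.g. a geometrically
    ergodic chain with a finite second moment)."""
    chain = np.asarray(chain, dtype=float)
    b = chain.size // n_batches  # batch length; trailing remainder dropped
    means = chain[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    # The sample variance of the batch means estimates sigma^2 / b, so the
    # standard error of the overall mean is sd(batch means) / sqrt(n_batches).
    return means.std(ddof=1) / np.sqrt(n_batches)
```

Without geometric ergodicity (and hence without a CLT), this estimate has no asymptotic justification, which is why the ergodicity result matters to practitioners.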
In this article, we construct a two-block Gibbs sampler using Polson et al.'s (2013) data augmentation technique with Pólya-Gamma latent variables for Bayesian logistic linear mixed models under proper priors. Furthermore, we prove uniform ergodicity of this Gibbs sampler, which guarantees the existence of central limit theorems for MCMC-based estimators.
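To make the Pólya-Gamma augmentation concrete, the sketch below implements the two-block Gibbs sampler for the simpler fixed-effects logistic model with a proper Gaussian prior (the mixed-model case adds random-effects blocks). Exact Pólya-Gamma samplers exist (e.g. the `polyagamma` package implements Polson et al.'s method); to stay self-contained, this sketch approximates a PG(1, c) draw by truncating its infinite gamma-convolution representation, which is an assumption of the illustration rather than the authors' algorithm.

```python
import numpy as np

def rpg_approx(c, rng, n_terms=200):
    """Approximate PG(1, c) draws via a truncated version of the series
    PG(b, c) = (1/(2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2/(4 pi^2)),
    with g_k ~ Gamma(b, 1). Exact samplers should be preferred in practice."""
    c = np.atleast_1d(c)
    k = np.arange(1, n_terms + 1)[:, None]
    g = rng.gamma(1.0, 1.0, size=(n_terms, c.size))
    denom = (k - 0.5) ** 2 + c[None, :] ** 2 / (4.0 * np.pi ** 2)
    return (g / denom).sum(axis=0) / (2.0 * np.pi ** 2)

def pg_gibbs_logistic(X, y, prior_var=100.0, n_iter=1000, seed=0):
    """Two-block Gibbs sampler for Bayesian logistic regression with a
    proper N(0, prior_var * I) prior, via Polya-Gamma data augmentation."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    kappa = y - 0.5
    B_inv = np.eye(p) / prior_var
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Block 1: omega_i | beta ~ PG(1, x_i' beta)
        omega = rpg_approx(X @ beta, rng)
        # Block 2: beta | omega, y ~ N(m, V) with
        # V = (X' Omega X + B^{-1})^{-1}, m = V X' kappa.
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)
        m = V @ (X.T @ kappa)
        beta = m + np.linalg.cholesky(V) @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

Both conditionals are drawn exactly in closed form (up to the PG approximation here), which is what makes the two-block structure attractive computationally.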
Most sampling techniques for online social networks (OSNs) are based on a particular sampling method on a single graph, which is referred to as a statistic. However, various sampling methods on different graphs may be applicable to the same OSN, and they can lead to different sampling efficiencies, i.e., asymptotic variances. To utilize multiple statistics for accurate measurements, we formulate a mixture sampling problem, through which we construct a mixture unbiased estimator that minimizes the asymptotic variance. Given fixed sampling budgets for different statistics, we derive the optimal weights to combine the individual estimators; given a fixed total budget, we show that a greedy allocation towards the most efficient statistic is optimal. In practice, the sampling efficiencies of statistics can be quite different for various targets and are unknown before sampling. To solve this problem, we design a two-stage framework that adaptively spends a partial budget to test different statistics and allocates the remaining budget to the inferred best statistic. We show that our two-stage framework is a generalization of 1) randomly choosing a statistic and 2) evenly allocating the total budget among all available statistics, and that our adaptive algorithm achieves higher efficiency than these benchmark strategies both in theory and in experiments.
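The "optimal weights" step above is, for independent unbiased estimators, the classical inverse-variance weighting: weighting each estimator by the reciprocal of its (asymptotic) variance minimizes the variance of the combination while preserving unbiasedness. The sketch below illustrates this generic principle; it is not the paper's full budget-allocation machinery, and the function name is an assumption.

```python
import numpy as np

def combine_estimators(estimates, variances):
    """Minimum-variance unbiased combination of independent unbiased
    estimators: weight w_i proportional to 1 / var_i.

    Returns the combined estimate, its variance 1 / sum(1/var_i),
    and the weights used."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    combined = float(np.dot(w, np.asarray(estimates, dtype=float)))
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var, w
```

Note that the combined variance 1/sum(1/var_i) is never larger than the smallest individual variance, which is why mixing statistics can only help when the variances are known; the two-stage framework addresses the realistic case where they must first be estimated from a partial budget.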