In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce bias from unobserved confounders. However, a developing literature has shown that conditioning on certain covariates may instead increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the literature on bias-increasing covariates by first introducing a way to decompose omitted variable bias into three constituent parts: bias due to an unobserved confounder, bias due to excluding observed covariates, and bias due to amplification. This decomposition leads to two important findings. First, although instruments have been the primary focus of the bias amplification literature to date, we show that the popular approach of adding group fixed effects can lead to bias amplification as well. This finding matters because many practitioners treat fixed effects as a convenient way to account for any and all group-level confounding, and as harmless at worst. Second, we introduce the concept of bias unmasking and show how it can be even more insidious than bias amplification in some cases. After introducing these new results analytically, we use constructed observational placebo studies to illustrate bias amplification and bias unmasking with real data. Finally, we propose a way to add bias decomposition information to graphical displays for sensitivity analysis, to help practitioners think through the potential for bias amplification and bias unmasking in actual applications.
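The amplification mechanism is easy to reproduce in simulation. The following is a minimal sketch, not the authors' code: the data-generating process and all coefficients are invented for illustration. Conditioning on an instrument-like covariate z (which affects treatment but not the outcome directly) removes treatment variation that is unrelated to the unobserved confounder u, so the bias from omitting u grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

z = rng.normal(size=n)                # instrument-like covariate: affects t only
u = rng.normal(size=n)                # unobserved confounder: affects t and y
t = z + u + rng.normal(size=n)        # treatment
y = 2.0 * t + u + rng.normal(size=n)  # outcome; true treatment effect = 2.0

def coef_on_t(y, *covariates):
    """OLS of y on an intercept, t, and any extra covariates; returns the t coefficient."""
    X = np.column_stack((np.ones(n), t) + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"unadjusted:      {coef_on_t(y):.3f}")     # ~2.33 (omitted-variable bias ~1/3)
print(f"adjusting for z: {coef_on_t(y, z):.3f}")  # ~2.50 (bias amplified to ~1/2)
```

In this hypothetical setup, adjusting for z increases the bias from roughly 0.33 to 0.50, even though z is a pre-treatment covariate that many practitioners would include by default.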
We are concerned with unbiased estimation of a treatment effect in non-experimental studies with grouped or multilevel data. When analyzing such data with this goal, practitioners typically include as many predictors (controls) as possible in an attempt to satisfy ignorability of the treatment assignment. In a two-level setting, there are two classes of potential confounders to consider, and attempting to satisfy ignorability conditional on just one class leads to a different treatment effect estimator than attempting to satisfy it for the other (or both). The three estimators considered in this paper are the so-called "within," "between," and OLS estimators. We generate bounds on the potential differences in bias across these competing estimators to inform model selection. Our approach relies on a parametric model for grouped data with omitted confounders and establishes a framework for sensitivity analysis in the two-level modeling context. The method draws on information from parameters estimated under a variety of multilevel model specifications. We characterize the strength of the confounding and the corresponding bias using easily interpretable parameters and graphical displays. We apply this approach to data from a multinational educational evaluation study and demonstrate the extent to which different treatment effect estimators may be robust to potential unobserved individual- and group-level confounding.
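To make the within/between/OLS distinction concrete, here is a minimal simulation sketch; the data-generating process is hypothetical and not taken from the paper. An unobserved group-level confounder v biases the between and pooled OLS estimators, while the within (group fixed effects) estimator removes it by demeaning; an unobserved individual-level confounder would instead bias the within estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
G, m = 500, 20                          # 500 groups of 20 units each
g = np.repeat(np.arange(G), m)          # group index for each unit

v = rng.normal(size=G)[g]               # unobserved group-level confounder
t = 0.8 * v + rng.normal(size=G * m)    # treatment loads on v
y = 2.0 * t + v + rng.normal(size=G * m)  # true treatment effect = 2.0

def group_mean(x):
    return np.bincount(g, x) / m        # per-group means, length G

# within estimator: demean within groups (equivalent to group fixed effects)
t_w, y_w = t - group_mean(t)[g], y - group_mean(y)[g]
b_within = (t_w @ y_w) / (t_w @ t_w)

# between estimator: regress group means of y on group means of t
tb, yb = group_mean(t) - group_mean(t).mean(), group_mean(y) - group_mean(y).mean()
b_between = (tb @ yb) / (tb @ tb)

# pooled OLS, ignoring the grouping
tc, yc = t - t.mean(), y - y.mean()
b_ols = (tc @ yc) / (tc @ tc)

print(f"within:  {b_within:.3f}")   # ~2.0: demeaning removes v
print(f"between: {b_between:.3f}")  # ~3.2: heavily biased by v
print(f"OLS:     {b_ols:.3f}")      # ~2.5: a weighted mix of within and between
```

Under this assumed data-generating process, the gap between the estimators is itself a signal of group-level confounding, which is the kind of information the paper's sensitivity analysis framework exploits.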