2021 · Preprint
DOI: 10.48550/arxiv.2104.06581

On the implied weights of linear regression for causal inference

Abstract: In this paper, we derive and analyze the implied weights of linear regression methods for causal inference. We obtain new closed-form, finite-sample expressions of the weights for various types of estimators based on multivariate linear regression models. In finite samples, we show that the implied weights have minimum variance, exactly balance the means of the covariates (or transformations thereof) included in the model, and produce estimators that may not be sample bounded. Furthermore, depending on the spe…

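To make the implied-weights idea concrete, here is a minimal numerical sketch (not the paper's code; the simulated data, sizes, and variable names are illustrative assumptions). For the regression of the outcome on an intercept, the treatment indicator, and the covariates, the OLS coefficient on treatment is a linear functional of the outcomes, so each unit's implied weight can be read off directly, and the exact mean-balance and sample-boundedness properties can be checked numerically.

```python
import numpy as np

# Illustrative simulated data (sizes, names, and coefficients are assumptions).
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))                    # covariates
T = rng.binomial(1, 0.4, size=n)               # treatment indicator
Y = X @ np.array([1.0, -0.5, 2.0]) + 2.0 * T + rng.normal(size=n)

# Uninteracted regression design: intercept, treatment, covariates.
Z = np.column_stack([np.ones(n), T, X])

# The OLS coefficient on T equals e_T' (Z'Z)^{-1} Z' Y, a linear functional of Y,
# so the treatment row of (Z'Z)^{-1} Z' holds each unit's implied weight.
w = np.linalg.solve(Z.T @ Z, Z.T)[1]

tau_ols = np.linalg.lstsq(Z, Y, rcond=None)[0][1]
print(np.isclose(w @ Y, tau_ols))              # weights reproduce the OLS estimate

# Exact balance: w'Z = e_T', so the weights sum to zero, put total weight one on
# the treated, and the weighted covariate means cancel exactly.
print(np.allclose(w @ X, 0.0), np.isclose(w.sum(), 0.0), np.isclose(w @ T, 1.0))

# Sample boundedness can fail: nothing forces treated weights to be nonnegative
# or control weights nonpositive; report whether any sign flips occur in this draw.
print((w[T == 1] < 0).any() or (w[T == 0] > 0).any())
```

The balance check is exact rather than approximate because w is a row of (Z'Z)^{-1}Z', so w'Z = e_T' holds by construction.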
Cited by 3 publications (8 citation statements)
References 43 publications
“…Following Chattopadhyay and Zubizarreta (2022), we focus on two common regression modeling approaches. In the first approach, the outcome is regressed on the covariates and the treatment indicator without any treatment-covariate interactions, and the average treatment effect is estimated by the coefficient of the treatment indicator.…”
Section: Methods
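As a toy illustration of that first approach (a sketch under assumed simulated data, using statsmodels; nothing here is from the quoted paper), the average treatment effect estimate is read off as the coefficient on the treatment indicator:

```python
import numpy as np
import statsmodels.api as sm

# Toy data; the data-generating process and names are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))
T = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))   # treatment depends on X
Y = 1.5 * T + X @ np.array([1.0, -1.0]) + rng.normal(size=n)

# First approach: regress Y on the covariates and the treatment indicator,
# with no treatment-covariate interactions; the ATE estimate is the
# coefficient on the treatment indicator.
design = sm.add_constant(np.column_stack([T, X]))     # columns: const, T, X1, X2
fit = sm.OLS(Y, design).fit()
print("ATE estimate (coefficient on T):", fit.params[1])
```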
“…Recently, Chattopadhyay and Zubizarreta (2022) offered a connection between weighting and linear regression methods for causal inference. In particular, they obtained new closed form expressions for the implicit weights in different methods involving the linear regression model.…”
Section: Introduction
“…Many existing approaches can be written as examples of Equation 12 with various choices of ζ and χ, including the stable balancing weights approach (Zubizarreta, 2015); the lasso minimum distance estimator of the inverse propensity score; the regularized calibrated propensity score estimator; and entropy balancing (Hainmueller, 2012). In fact, even imputing µ₁ by averaging the predictions of a least squares regression of Y on X fit on the treated units can be written in this form: it is a weighted average of observations Yᵢ with weights solving the problem above (11) for σ² = 0 (Chattopadhyay & Zubizarreta, 2021). Similarly, ridge regression corresponds to σ² > 0 (Ben-Michael, Feller, & Rothstein, 2021).…”
Section: Balancing Approach To Weighting
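That last observation is easy to verify numerically. The following sketch uses simulated data and illustrative names (it is not code from any of the cited papers): a least squares fit of Y on X on the treated units, averaged over the sample, equals a weighted average of the treated outcomes, and adding a ridge penalty (loosely playing the role of σ² above) changes the weights but keeps the weighted-average form.

```python
import numpy as np

# Simulated data; names and sizes are illustrative assumptions.
rng = np.random.default_rng(2)
n, p = 300, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + covariates
T = rng.binomial(1, 0.5, size=n)
Y = X @ rng.normal(size=p + 1) + T + rng.normal(size=n)

Xt, Yt = X[T == 1], Y[T == 1]
x_bar = X.mean(axis=0)                                       # sample covariate means

def imputed_mu1(lam):
    """Impute mu_1 via a (ridge-)regression of Y on X fit on treated units only."""
    A = Xt.T @ Xt + lam * np.eye(X.shape[1])
    beta = np.linalg.solve(A, Xt.T @ Yt)
    mu1_pred = x_bar @ beta                                  # average prediction over the sample
    # The same quantity as an explicit weighted average of treated outcomes:
    w = np.linalg.solve(A, x_bar) @ Xt.T                     # w_i = x_bar' A^{-1} x_i
    return mu1_pred, w @ Yt

for lam in (0.0, 5.0):                                       # lam = 0: OLS; lam > 0: ridge
    pred, weighted = imputed_mu1(lam)
    print(lam, np.isclose(pred, weighted))                   # True in both cases
```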
“…In the case of personalization, our estimand is equivalent to a conditional average treatment effect (see, e.g., Chapter 12 of Imbens and Rubin 2015). In what follows, we characterize the target population by a covariate profile x*; that is, a vector of q ≥ p summary statistics of its observed covariate distribution. Usually, x* amounts to the means of the covariates in the target population, but the profile will ideally include higher order summary statistics to more completely characterize the distribution of covariates in the target population.…”
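As a small, purely hypothetical illustration of assembling such a profile (data and names are assumptions, not taken from the quoted paper), x* can stack the covariate means with higher-order summaries such as second moments:

```python
import numpy as np

# Hypothetical target-population sample; contents are illustrative assumptions.
rng = np.random.default_rng(3)
X_target = rng.normal(size=(1000, 4))            # target sample with p = 4 covariates

# Covariate profile x*: the p means followed by higher-order summaries
# (here the unique entries of the second-moment matrix), giving q >= p entries.
means = X_target.mean(axis=0)
second_moments = (X_target[:, :, None] * X_target[:, None, :]).mean(axis=0)
upper = np.triu_indices(X_target.shape[1])       # unique (i <= j) moment entries
x_star = np.concatenate([means, second_moments[upper]])

print(x_star.shape)                              # (p + p*(p+1)/2,) = (14,)
```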