2021
DOI: 10.48550/arxiv.2102.13218
Preprint

Interpretable Sensitivity Analysis for Balancing Weights

Abstract: Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to obtain weights that directly minimize covariate imbalance. In particular, we adapt a sensitivity analysis framework using the percentile bootstrap for a broad class …
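The abstract describes balancing weights only briefly, so here is a minimal, self-contained sketch of what such an estimator can look like, assuming a simple "stable balancing weights" style formulation (squared-weight penalty with exact mean balance). The toy data, variable names, and solver choice are illustrative assumptions, not the paper's exact estimator.

```python
# A minimal sketch of a balancing-weights estimator, assuming a simple
# "stable balancing weights" formulation: minimize weight dispersion subject
# to exact mean balance. Illustration only, not the paper's estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy observational data: 2 covariates, binary treatment, continuous outcome.
n, d = 200, 2
X = rng.normal(size=(n, d))
propensity = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, propensity)              # treatment indicator
Y = X @ np.array([1.0, -1.0]) + 2.0 * A + rng.normal(size=n)

Xt, Xc = X[A == 1], X[A == 0]
Yt, Yc = Y[A == 1], Y[A == 0]
target = Xt.mean(axis=0)                      # treated covariate means

# Minimize the sum of squared weights subject to:
#   (i) weighted control covariate means equal the treated means,
#  (ii) weights form a probability distribution (nonnegative, sum to one).
nc = Xc.shape[0]
cons = [
    {"type": "eq", "fun": lambda w: Xc.T @ w - target},
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},
]
res = minimize(
    fun=lambda w: np.sum(w ** 2),
    x0=np.full(nc, 1.0 / nc),
    jac=lambda w: 2 * w,
    constraints=cons,
    bounds=[(0.0, None)] * nc,
    method="SLSQP",
)
w = res.x

# Weighted ATT estimate: treated mean minus reweighted control mean.
att_hat = Yt.mean() - w @ Yc
print("imbalance after weighting:", np.abs(Xc.T @ w - target).max())
print("ATT estimate:", att_hat)
```

The direct targeting of covariate imbalance in the constraints is the feature the sensitivity analysis framework in the paper is built around; the squared-weight objective here is just one convenient choice of dispersion penalty.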

Cited by 3 publications
(4 citation statements)
references
References 31 publications
“…Unfortunately no sensitivity analysis appropriate for block bootstrap inference has yet been developed, either for time agnosticism or other strong assumptions such as ignorability. The many existing methods for sensitivity analysis (developed primarily with ignorability assumptions in mind) are unsatisfying in our framework for a variety of different reasons: some rely on randomization inference (Rosenbaum, 2002b), others focus on weighting methods rather than matching (Zhao et al., 2019; Soriano et al., 2021), and others are limited to specific outcome measures (Ding and VanderWeele, 2016) or specific test statistics (Cinelli and Hazlett, 2020). We view the development of compelling sensitivity analysis approaches to be an especially important methodological objective for matching under rolling enrollment.…”
Section: Discussion (mentioning)
confidence: 99%
“…A good way to approach such comparisons is to think about the relative sizes of the biases contributed by ignoring each variable, since our ultimate goal is to avoid biases in treatment effect estimation. Inspired by Cinelli and Hazlett (2020) and Soriano et al. (2021), we plot quantities which correspond to bias estimates for the treatment effect under a simple set of statistical models and which take on the convenient form of hyperbolic curves on the jointVIP. These curves, which arise from the classical omitted variable bias (OVB) framework, provide valuable context about the relative amounts of potential bias contributed by distant variables, making comparisons easier.…”
Section: Study Population (mentioning)
confidence: 99%
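For readers unfamiliar with the omitted variable bias framework referenced in the excerpt above, the hyperbolic shape comes from the textbook linear-model decomposition (the exact parametrization used by jointVIP is not given in this excerpt and may differ):

\[
\operatorname{bias}(\hat\tau) \;=\; \lambda \,\delta,
\qquad
\lambda = \text{association of the omitted } U \text{ with the outcome},\quad
\delta = \text{imbalance of } U \text{ across treatment groups},
\]

so the set of pairs \((\delta,\lambda)\) producing a fixed bias \(b\) satisfies \(\lambda = b/\delta\), a hyperbola in the \((\delta,\lambda)\) plane.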
“…Several recent papers have proposed using an alternative approach to performing sensitivity analysis in the form of marginal sensitivity models (Zhao et al. (2019), Soriano et al. (2021)). Marginal sensitivity models define a class of sensitivity models that bound the underlying error in the selection probabilities.…”
Section: A2 Relationship To Marginal Sensitivity Models (mentioning)
confidence: 99%
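As context for the excerpt above, here is a rough sketch of the kind of bound a marginal sensitivity model imposes, assuming the common parametrization in which each true inverse-probability weight may differ from its nominal value by at most a multiplicative factor Lambda. It illustrates how a weighted outcome mean then ranges over an interval; it is not the percentile-bootstrap inference developed in the paper, and all names and data below are hypothetical.

```python
# Sketch of a marginal-sensitivity-model bound: each true weight lies in
# [w_i / lam, w_i * lam] around its nominal value, and we trace the range of
# the resulting Hajek-style weighted mean with a simple threshold search.
import numpy as np

def weighted_mean_range(y, w_nominal, lam):
    """Range of sum(w*y)/sum(w) when each w_i lies in [w_i/lam, w_i*lam]."""
    lo_w, hi_w = w_nominal / lam, w_nominal * lam

    def extreme(maximize):
        # The optimal weights have a threshold form in y: units with large y
        # get the high weight when maximizing (the low weight when minimizing),
        # so it suffices to check every prefix split after sorting by y.
        order = np.argsort(-y if maximize else y)
        y_s, lo_s, hi_s = y[order], lo_w[order], hi_w[order]
        best = None
        for k in range(len(y) + 1):
            w = np.concatenate([hi_s[:k], lo_s[k:]])
            val = np.dot(w, y_s) / w.sum()
            if best is None:
                best = val
            else:
                best = max(best, val) if maximize else min(best, val)
        return best

    return extreme(maximize=False), extreme(maximize=True)

rng = np.random.default_rng(1)
y = rng.normal(size=50)
w = rng.uniform(1.0, 3.0, size=50)   # nominal inverse-propensity weights
for lam in (1.0, 1.5, 2.0):          # lam = 1 recovers the point estimate
    print(lam, weighted_mean_range(y, w, lam))
```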
“…Zhao and Percival (2016) demonstrated that entropy balancing weights are implicitly estimating propensity score weights, with a modified loss function. See Wang and Zubizarreta (2020), Soriano et al. (2021), and Ben-Michael et al. (2020) for more discussion on the connection between balancing weights and inverse-propensity score weighting.…”
(mentioning)
confidence: 99%
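One way to see the connection stated in the excerpt above is through the standard dual derivation of entropy balancing (a textbook argument, not a result quoted from the cited papers): the program

\[
\min_{w}\ \sum_{i:\,A_i=0} w_i \log w_i
\quad \text{s.t.} \quad
\sum_{i:\,A_i=0} w_i x_i = \bar{x}_{\text{treated}},\qquad
\sum_{i:\,A_i=0} w_i = 1,\qquad w_i \ge 0
\]

has solution

\[
w_i = \frac{\exp(x_i^\top \lambda)}{\sum_{j:\,A_j=0} \exp(x_j^\top \lambda)},
\]

the same functional form as the normalized odds weights \(e(x_i)/(1-e(x_i))\) implied by a logistic propensity score model, so entropy balancing can be read as fitting that model with a different, balance-targeting loss.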