2017
DOI: 10.2139/ssrn.2967600

Inference on Breakdown Frontiers

Abstract: Given a set of baseline assumptions, a breakdown frontier is the boundary between the set of assumptions which lead to a specific conclusion and those which do not. In a potential outcomes model with a binary treatment, we consider two conclusions: first, that the ATE is at least a specific value (e.g., nonnegative); second, that the proportion of units who benefit from treatment is at least a specific value (e.g., at least 50%). For these conclusions, we derive the breakdown frontier for two kinds of assumption…
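The breakdown-frontier idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual derivation: the linear bound function and the relaxation parameters `c1`, `c2` below are hypothetical stand-ins for the two kinds of assumption the paper relaxes.

```python
# Toy sketch of a breakdown frontier (hypothetical bound function, NOT the
# paper's estimator): suppose relaxing one assumption by c1 and another by c2
# shrinks the identified lower bound on the ATE linearly,
#   LB(c1, c2) = ate_hat - k1 * c1 - k2 * c2.
# The breakdown frontier for the conclusion "ATE >= 0" is the set of (c1, c2)
# where LB(c1, c2) = 0: relaxations below the frontier preserve the conclusion.

def lower_bound(ate_hat, c1, c2, k1=1.0, k2=0.5):
    """Hypothetical ATE lower bound under relaxations (c1, c2)."""
    return ate_hat - k1 * c1 - k2 * c2

def frontier_c2(ate_hat, c1, k1=1.0, k2=0.5):
    """Solve LB(c1, c2) = 0 for c2: the frontier traced as a function of c1."""
    return (ate_hat - k1 * c1) / k2

ate_hat = 0.3  # illustrative point estimate
for c1 in (0.0, 0.1, 0.2, 0.3):
    c2 = frontier_c2(ate_hat, c1)
    print(f"c1={c1:.2f}: 'ATE >= 0' breaks down at c2={c2:.2f}")
```

Under this toy parameterization, points on the curve `c2 = frontier_c2(ate_hat, c1)` are exactly the relaxations at which the conclusion first fails, which is what the frontier formalizes.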

Cited by 16 publications (28 citation statements). References 71 publications.
“…Our method "regularizes" or smoothes the objective function, and we show that this method leads to a straightforward bias correction method. While the idea of regularization appears in some other contexts, such as Haile & Tamer (2003), Chernozhukov, Kocatulum & Menzel (2015) and Masten & Poirier (2017), we formally show that this approach has uniform validity in the context of inference with simulated variables. Our regularization method is based on the class of µ-smooth approximations studied in the non-smooth optimization literature (Nesterov, 2005;Beck & Teboulle, 2012).…”
Section: Introduction (mentioning)
confidence: 82%
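The µ-smooth approximations cited above (in the sense of Nesterov's smoothing) can be illustrated on the simplest non-smooth map, x ↦ max(x, 0). A standard smooth surrogate is the scaled softplus, which is smooth for every µ > 0 and approximates max(x, 0) uniformly within µ·log 2; the choice of this particular surrogate here is illustrative, not taken from the cited papers.

```python
import math

# Scaled softplus: f_mu(x) = mu * log(1 + exp(x / mu)).
# It is smooth for mu > 0 and satisfies
#   max(x, 0) <= f_mu(x) <= max(x, 0) + mu * log(2),
# so the approximation error vanishes uniformly as mu -> 0.

def smooth_max0(x, mu=0.1):
    # Numerically stable form: mu*log(1+exp(x/mu))
    #   = max(x, 0) + mu*log(1 + exp(-|x|/mu))
    return max(x, 0.0) + mu * math.log1p(math.exp(-abs(x) / mu))

for mu in (0.5, 0.1, 0.01):
    worst = max(abs(smooth_max0(i / 100, mu) - max(i / 100, 0.0))
                for i in range(-200, 201))
    print(f"mu={mu}: worst error {worst:.4f} (theoretical bound {mu * math.log(2):.4f})")
```

The worst error is attained at the kink x = 0, where the gap equals exactly µ·log 2, matching the stated bound.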
“…Using these results, we provide functional forms of smooth approximations to some of the commonly used test statistics. The idea of regularizing test statistics (or estimated bounds) also appears in related contexts (Haile & Tamer, 2003;Chernozhukov et al, 2015;Kaido, 2017;Masten & Poirier, 2017). Our contribution here is to show its uniform validity in the context of inference with simulated variables.…”
Section: Regularization Of Test Statistics (mentioning)
confidence: 91%
“…Alternatively, one may employ a numerical estimator following Hong and Li (2018), but there are no data-driven procedures to date for selecting the step size (needed to carry out the numerical differentiation). This raises substantive concerns because the resulting bootstrap may be sensitive to the choice of the step size, as documented in Masten and Poirier (2021) and Chen and Fang (2019).…”
Section: Appendix C: The Special Case (mentioning)
confidence: 99%
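The step-size sensitivity noted above can be seen in a minimal example. This is only an illustration of the generic phenomenon, not Hong and Li's bootstrap procedure: a one-sided numerical directional derivative of a kinked map can change sharply with the step size when the estimate sits near the kink.

```python
# Numerical directional derivative of phi at theta_hat in direction h:
#   d(eps) = (phi(theta_hat + eps * h) - phi(theta_hat)) / eps.
# Near a kink, its value depends heavily on the step size eps.

def phi(theta):
    return max(theta, 0.0)  # non-differentiable at 0

theta_hat = -0.01  # estimate just below the kink
h = 1.0            # direction of perturbation

for eps in (1.0, 0.1, 0.001):
    d = (phi(theta_hat + eps * h) - phi(theta_hat)) / eps
    print(f"eps={eps}: numerical derivative = {d:.3f}")
```

Here the estimate ranges from roughly 1 (the right derivative) down to 0 (the left derivative) as eps shrinks past the distance to the kink, which is the kind of sensitivity the cited papers document.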
“…Given the lack of evidence of PS misspecification, an alternative reason for these conflicting results is that the unconfoundedness assumption does not hold in this particular application. Although such an assumption is not directly testable, the sensitivity analysis in Masten and Poirier (2017) suggests that this may be the case.…”
Section: Effect Of Child Soldiering On Future Earnings (mentioning)
confidence: 99%