2010
DOI: 10.1007/s11292-010-9095-3

When second best is good enough: a comparison between a true experiment and a regression discontinuity quasi-experiment

Abstract: In this paper, we compare the results from a randomized clinical trial to the results from a regression discontinuity quasi-experiment when both designs are implemented in the same setting. We find that the results from the two approaches are effectively identical. We …

* Geoffrey Barnes' work on this project was funded in part by a grant from the Smith Richardson Foundation to the Jerry Lee Center of Criminology at the University of Pennsylvania. For this support we are grateful. Thanks also go to Larry Sherman…

Cited by 76 publications (65 citation statements)
References 37 publications
“…We might offer that in such types of limited datasets that scholars consider alternative strategies for causal identification which do not rely on the selection on observables assumption, such as instrumental variables (Angrist 2006) or regression discontinuity designs (Berk et al 2010). Our results also would seem to support the belief that propensity score methods will work best on datasets which include a rich array of pre-intervention measures which can be used to establish equivalence over a wide range of items across multiple domains.…”
Section: Discussion
confidence: 57%
“…Six studies provided us with standardized RD bias estimates directly or with the pooled standard deviation of the original study outcome necessary to calculate it directly (Aiken et al., ; Gleason, Resch, & Berk, ; Kisbu‐Sakarya, Cook, & Tang, in press; Nickerson, ; Shadish et al., ; Tang, Cook, & Kisbu‐Sakarya, forthcoming). Four studies used binary outcomes (Barrera‐Osorio, Filmer, & McIntyre, ; Berk et al., ; Green et al., ; Hyytinen et al., ). While it is common to use log odds ratios for binary outcomes in meta‐analyses, for ease of interpretation we stuck with mean differences of the binary outcomes and calculated treatment and control group standard deviations for them as the square roots of their variances.…”
Section: Methods for Meta-analyzing WSC Estimates
confidence: 99%
“…condition, whenever her/his observed value of the assignment variable equals or exceeds (resp. As proven in Goldberger (2008), the RDD can empirically produce causal effect estimates that are similar to those estimates of a standard randomized study (Aiken et al 1998;Buddelmeyer and Skoufias 2004;Black et al 2005;Schochet 2009;Berk et al 2010;Shadish et al 2011). Under relatively mild conditions, notably when subjects have imperfect control of the assignment variable, the RDD provides a "locally-randomized experiment."…”
Section: Discussion
confidence: 79%
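The excerpt above describes the sharp RDD assignment rule (treatment when the running variable equals or exceeds a cutoff) and the claim, echoed in the paper's abstract, that an RDD can reproduce the estimate from a randomized experiment. The following is a minimal illustrative simulation, not taken from the paper: all data, the constant treatment effect, and the bandwidth are invented assumptions chosen to show that, under these conditions, a difference in means (RCT) and a local linear fit at the cutoff (sharp RDD) recover the same effect.

```python
# Hypothetical simulation (not from Berk et al. 2010): compare a randomized
# experiment with a sharp regression discontinuity design when the true
# treatment effect is constant.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
effect = 2.0  # assumed true treatment effect

# Running variable (e.g., a risk score) and outcome noise.
x = rng.uniform(-1, 1, n)
noise = rng.normal(0, 1, n)

# Design 1: randomized experiment -- treatment assigned by coin flip.
t_rct = rng.integers(0, 2, n)
y_rct = 1.0 + 0.5 * x + effect * t_rct + noise
est_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Design 2: sharp RDD -- treated whenever x equals or exceeds the cutoff 0.
t_rdd = (x >= 0).astype(int)
y_rdd = 1.0 + 0.5 * x + effect * t_rdd + noise

# Local linear fit on each side of the cutoff within a bandwidth h;
# the RDD estimate is the gap between the two fits at the cutoff.
h = 0.25
left = (x < 0) & (x > -h)
right = (x >= 0) & (x < h)
fit_left = np.polyfit(x[left], y_rdd[left], 1)
fit_right = np.polyfit(x[right], y_rdd[right], 1)
est_rdd = np.polyval(fit_right, 0.0) - np.polyval(fit_left, 0.0)

print(f"RCT estimate: {est_rct:.2f}, RDD estimate: {est_rdd:.2f}")
```

With a large sample and a correctly specified local fit, both estimates land near the assumed effect of 2.0, mirroring the "effectively identical" finding, though the RDD estimate is noisier because it uses only observations near the cutoff.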