2014
DOI: 10.1002/pam.21764
What Works Best and When: Accounting for Multiple Sources of Pure Selection Bias in Program Evaluations

Abstract: Most evaluations are still quasi-experimental, and most recent quasi-experimental methodological research has focused on various types of propensity score matching to minimize conventional selection bias on observables. Although these methods create better-matched treatment and comparison groups on observables, the issue of selection on unobservables still looms large. Thus, in the absence of being able to run randomized controlled trials (RCTs) or natural experiments, it is important to understand how well d…

Cited by 7 publications (4 citation statements)
References 41 publications (90 reference statements)
“…More than 70 WSCs have been conducted to date, including at least three in this journal (Bifulco, ; Wilde & Hollister, ; Wing & Cook, ). Two other studies in this journal used somewhat comparable methods (Jung & Pirog, , ). Most WSCs test how effective design and analysis methods are for reducing the population differences that cloud the causal interpretation of nonexperimental data.…”
Section: Introduction (mentioning)
confidence: 99%
“…While we cannot mitigate or even test for self‐selection bias directly, we conduct one supplemental analysis to verify our results. Our methodological strategy may also partially pre‐empt this concern, as Jung and Pirog (2014) show that fixed effect estimators reduce bias arising from self‐selection.…”
Section: Methods (mentioning)
confidence: 99%
“…Large governments that select into training likely possess greater financial resources and increased administrative capacity compared with smaller governments that did not participate. For this reason, it is likely that the analytical results would be confounded by self-selection bias absent the assumptions inherent in a DD design, although recent methodological work suggests that the inclusion of fixed effects helps to mitigate this bias (Jung & Pirog, 2014). We separately address this concern by enforcing a region of common support containing "treated" governments that participated in training and comparable governments that did not.…”
Section: Methods (mentioning)
confidence: 99%
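The two Methods citations above both invoke the same idea: in a difference-in-differences (DD) design, unit fixed effects absorb time-invariant unobservables, so self-selection that operates through those unobservables no longer biases the estimate. A minimal simulation sketch of that mechanism is below. It is not the estimator from Jung and Pirog (2014); all variable names, parameter values, and the two-way within transformation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 200, 6
tau = 1.5  # true treatment effect (assumed for the simulation)

# Unobserved, time-invariant unit heterogeneity.
unit_fe = rng.normal(0, 2, n_units)

# Self-selection: units with higher unobserved heterogeneity are
# more likely to take up treatment, which begins at period 3.
treated = (unit_fe + rng.normal(0, 1, n_units)) > 0
post = np.arange(n_periods) >= 3
d = (treated[:, None] & post[None, :]).astype(float)

# Panel outcome: unit effect + common time trend + treatment + noise.
y = (unit_fe[:, None]
     + 0.5 * np.arange(n_periods)[None, :]
     + tau * d
     + rng.normal(0, 1, (n_units, n_periods)))

# Naive post-period comparison of treated vs. untreated means is
# contaminated by the selection on unit_fe.
naive = y[treated][:, 3:].mean() - y[~treated][:, 3:].mean()

def demean(x):
    """Two-way within transform for a balanced panel:
    sweep out unit means, then period means."""
    x = x - x.mean(axis=1, keepdims=True)
    return x - x.mean(axis=0, keepdims=True)

# Fixed-effects (within) estimate: regress demeaned y on demeaned d.
y_dm, d_dm = demean(y), demean(d)
tau_hat = (d_dm * y_dm).sum() / (d_dm ** 2).sum()

print("naive:", round(naive, 2), "fixed effects:", round(tau_hat, 2))
```

Because `unit_fe` is constant within each unit, the within transformation removes it exactly, so `tau_hat` recovers the true effect while the naive contrast absorbs the selection bias. This only protects against selection on time-invariant unobservables; selection on time-varying shocks would still bias both estimates.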