2019
DOI: 10.1057/s41287-019-00202-w
Towards Appropriate Impact Evaluation Methods

Abstract: The choice of evaluation methods is one of the questions that most plague evaluators (Szanyi, Azzam, & Galen, 2012). This question is especially pressing in development evaluation, where interventions tend to be very complex and multiple stakeholders hold competing interests (Holvoet et al., 2018). While one can discern an emerging consensus among evaluation scholars that (quasi-)experimental evidence cannot lay a monopoly claim to the production of the best effectiveness evidence (Stern et al., 2012), …

Cited by 5 publications (6 citation statements) · References 13 publications
“…Experimental designs such as Randomized Controlled Trials (RCTs), which rely on counterfactual comparisons between situations with ("policy on") and without the intervention ("policy off"), are particularly suited for this objective. While experimental evidence can be very insightful for policy makers, particularly for accountability purposes (Pattyn 2019), not all policy settings lend themselves to the application of RCTs. Importantly, the "policy works" claim relies on the assumption that the intervention is the primary cause of the effect of interest (Stern et al. 2012, p. 38).…”
Section: Different Conceptualizations of Policy Effectiveness
confidence: 99%
“…For the administration in charge of the implementation of the evaluations announced in the policy notes and the evaluation community at large, our findings can be read as an incentive to engage in policy evaluations that are not primarily accountability focused, but that also enable policy learning. In fact, not all evaluation methods lend themselves to policy learning (Pattyn 2019). This is not to say, on the other hand, that parliamentarians cannot use learning-oriented evaluations to hold ministers accountable (Speer et al. 2015; Bundi 2016).…”
Section: Results
confidence: 99%
“…There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [75], as well as political considerations, such as the credibility of the approach with key stakeholders [76]. There are inevitably 'horses for courses' [77].…”
Section: Implications for Conducting and Reporting QCA Studies
confidence: 99%