2003
DOI: 10.1177/0002716203254764
Why have Educational Evaluators Chosen Not to Do Randomized Experiments?

The Annals of the American Academy, September 2003 (p. 589).

Abstract: This article analyzes the reasons that have been adduced within the community of educational evaluators for not doing randomized experiments. The objections vary in cogency. Those that have most substance are not insurmountable, however, and strategies are mentioned for dealing with them. However, the objections are serious enough, and the remedies partial enough, that it seems hardly warranted to call exp…

Cited by 64 publications (54 citation statements); references 66 publications.
“…Perhaps practice may not be interested in philosophy of science; however, recommendations of a program without specified conditions may lead to disappointment when a model program is re-implemented without success. Specification of relevant conditions is essential in practice and its lack may be one reason why scholars argue against randomized experiments (Cook 2003).…”
Section: Replication in Developmental Prevention
confidence: 99%
“…24 An analysis of the reasons for not adopting the RCT design concludes that, despite serious practical objections and partial remedies, RCTs are logically and empirically superior to all currently known alternatives. 25 The view that the RCT is inappropriate to test the success of policy interventions is refuted by a bibliometric analysis, which concludes that between 6% and 15% of impact evaluations of childhood interventions in education and justice employ a randomised design. 26 Our own experience of conducting RCTs supports their use for evaluating social interventions.…”
Section: Evaluating Public Policy Interventions
confidence: 99%
“…Similarly, divergent views about appropriate methods for evaluating interventions are found in the areas of social welfare 44 and education, 25 where the Centre for Evidence-Based Social Services (www.ex.ac.uk/cebss/introduction.html) and the Evidence-Based Education Network UK (www.cemcentre.org/ebeuk/) stand out from many of their British professional colleagues in social welfare and education, respectively, as advocates for randomised evaluation.…”
Section: Chapter
confidence: 99%
“…Even if we acknowledge that randomised controlled trials (RCTs) are not always appropriate or possible, it is almost certainly the case that they could be used more often than they are (Torgerson and Torgerson, 2007;Cook, 2003;Slavin, 2008). Furthermore, there are non-randomised designs that are considerably stronger than the ones used in the examples here: for example, regression-discontinuity and time-series designs (Shadish et al, 2002;Cook et al, 2008).…”
Section: Recommendations for Research
confidence: 99%