2002
DOI: 10.1016/s1098-2140(02)00234-5
Evaluations of After-School Programs: A Meta-Evaluation of Methodologies and Narrative Synthesis of Findings

Cited by 50 publications (48 citation statements)
References 20 publications
“…First, the ASP can modify students' classroom misconduct, reducing disruptions that affect their learning or that of their classmates (Scott-Little et al., 2002; Durlak et al., 2010). This improved learning environment will therefore benefit both treated and nontreated children.…”
Section: Measuring the Overall ASP's Impact
confidence: 99%
“…In one study of evaluations of after-school programmes in the USA, the authors concluded that most suffered from severe reliability and validity problems. Implementation was only fully studied in a minority of studies, and where this occurred, only a very small number of evaluations used direct observation methods (Scott-Little et al., 2002).…”
Section: Introduction
confidence: 99%
“…Examples of evaluation designs include descriptive (e.g., case study, observational), correlational (e.g., cohort study, cross-sectional study), quasi-experimental (e.g., nonequivalent control groups design, regression discontinuity design), experimental (i.e., experiment with true randomization), and meta-analysis designs (Crano & Brewer, 2002). There has been a debate within the field of evaluation on what constitutes credible evidence (Donaldson, Christie, & Mark, 2009), with some evaluators arguing for RCTs as the "gold standard" (Petrosino, 2003; Scott-Little, Hamann, & Jurs, 2002) and others questioning the superiority of the RCT (Jacobs, 2004; McCall & Green, 2004). In particular, evaluators of youth programs have argued that RCTs may not always fit the program or the program's need for evaluation (McCall & Green, 2004).…”
Section: Evaluation Design
confidence: 99%