2019
DOI: 10.1080/1743727x.2019.1657081
A new methodological approach for evaluating the impact of educational intervention implementation on learning outcomes

Abstract: Randomized control trials (RCTs) are commonly regarded as the 'gold standard' for evaluating educational interventions. While this experimental design is valuable in establishing causal relationships between the tested intervention and outcomes, reliance on statistical aggregation typically underplays the situated context in which interventions are implemented. Developing innovative, systematic methods for evaluating implementation and understanding its impact on outcomes is vital to moving educational evaluat…

Cited by 31 publications (23 citation statements)
References 55 publications
“…One of the most pressing needs is to evaluate the impact of magic-based interventions by comparing their effect to that of a control group or another pedagogical intervention. There exists a large literature on how to develop and test such interventions within educational settings (see, e.g., Wagner, 1997;Outhwaite, Gulliford & Pitchford, 2019), and many of the debates and designs within that literature are applicable to the work reviewed in this paper. Second, future researchers might also consider employing ecologically valid variables, including those relating to knowledge retention (e.g., exam results and test scores), attention and engagement (e.g., class attendance, question-asking and eye tracking), attitudinal shifts (e.g., beliefs about science, pseudoscience and the paranormal) and emotional impact (e.g., validated mood questionnaires).…”
Section: Discussion
Confidence: 99%
“…The fidelity instrument required coders to record concrete, observable behaviors reflective of curriculum implementation. Codes were designed to capture both dosage (i.e., how much time of the day was spent delivering the curriculum) and adherence (i.e., whether the structure and sequence of curriculum activities were followed, see Outhwaite et al, 2019). Because codes explicitly tapped the complexity of the curriculum, it was necessary for coders to be highly familiar with the curriculum itself.…”
Section: Methods
Confidence: 99%
“…Interview guides for school staff and study facilitators were derived from the logic model, CFIR online resources (e.g., https://cfirguide.org/evaluation-design/qualitative-data/), and the broader literature investigating the delivery of interventions in school settings (44,45; see Additional File 1 for interview guides). Interviews will provide information about both implementation predictors and outcomes.…”
Section: Individual Interviews
Confidence: 99%