2017
DOI: 10.3102/0002831217726522

Validating a Fidelity Scale to Understand Intervention Effects in Classroom-Based Studies

Abstract: K–12 intervention studies often include fidelity of implementation (FOI) as a mediating variable, though most do not report the validity of fidelity measures. This article discusses the critical need for validated FOI scales. To illustrate our point, we describe the development and validation of the Implementation Validity Checklist (IVC-R), an observation tool for measuring FOI of a research-based instructional reading approach, Collaborative Strategic Reading. Following Kane (2006), Wilson (2004), and the gu…

Cited by 10 publications (7 citation statements)
References 93 publications

“…We used a coding document that enabled observers to measure two dimensions of treatment fidelity: treatment adherence and instructional quality. Vaughn and colleagues (2013) initially developed the coding document by modifying the Implementation Validity Checklist–Revised (IVC‐R), an observation tool that measures treatment fidelity of Collaborative Strategic Reading (another middle‐grades reading approach) with strong evidence of reliability and validity (Buckley et al, 2017). In modifying the IVC‐R to measure PACT, Vaughn et al kept many of the key elements of the IVC‐R the same: (a) Treatment adherence and instructional quality were measured, (b) several adherence items were used to score adherence to each of the major program components, (c) the rubrics for scoring instructional quality were unchanged, and (d) the Likert‐type scale ranges stayed the same.…”
Section: Methods (mentioning)
Confidence: 99%
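The coding document described above scores several Likert-type adherence items under each major program component, with instructional quality rated on a separate rubric. A minimal sketch of that two-dimension scoring structure (the component names, item counts, and scale ranges below are hypothetical illustrations, not the actual IVC-R items):

```python
# Hypothetical observer ratings: a few Likert-type adherence items (1-4)
# grouped under each major program component. Names are invented for
# illustration; the real IVC-R items are not reproduced here.
adherence_ratings = {
    "component_a": [4, 3, 4],
    "component_b": [2, 3, 3],
}

# Instructional quality is scored on its own rubric, kept distinct
# from adherence, mirroring the two fidelity dimensions in the text.
quality_rubric_score = 3  # single rubric rating on a 1-4 scale

def component_adherence(items):
    """Average the Likert items scored for one program component."""
    return sum(items) / len(items)

# Summarize adherence per component; quality is reported separately.
adherence_summary = {
    name: component_adherence(items)
    for name, items in adherence_ratings.items()
}
print(adherence_summary)
print(quality_rubric_score)
```

The point of the structure is that adherence and quality remain separable scores per observation, so each dimension can be related to student outcomes on its own.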
“…A review identified 43 studies examining the relation between treatment fidelity and student reading outcomes (van Dijk, Lane, & Gage, 2021), most of which were published in the last decade. At least 10 intervention studies measured multiple dimensions of treatment fidelity and used these data to analyze the dimensions’ influence on student outcomes in reading (Benner, Nelson, Stage, & Ralston, 2011; Boardman et al, 2016; Buckley, Moore, Boardman, Arya, & Maul, 2017; Domitrovich, Gest, Jones, Gill, & DeRousie, 2010; Fogarty et al, 2014; Guo et al, 2016; Hamre et al, 2010; Justice et al, 2009; Mendive, Weiland, Yoshikawa, & Snow, 2016; Vadasy & Sanders, 2009; Vaughn et al, 2015). The results of these fidelity analyses help explain why reading interventions succeed or fall short and the implementation factors associated with student gains in reading.…”
Citation type: mentioning
Confidence: 99%
“…Despite the widely accepted expectation that teachers will vary in their quality of delivery, theories of change for school interventions rarely depict branching paths that may explain the conditions under which only some students benefit from interventions (Zvoch, 2012). This is reasonable given that there is still much we do not know, such as how best to measure quality of delivery and test its association with student outcomes (e.g., Hill & Erickson, 2019; Humphrey, Barlow, & Lendrum, 2018; Buckley, Moore, Boardman, Arya, & Maul, 2017). We argue that a lack of recommendations in the literature on how to measure quality of delivery contributes to why it is rarely assessed and analyzed in school intervention studies (Hill & Erickson, 2019).…”
Section: How To Measure Quality Of Delivery (mentioning)
Confidence: 99%
“…Quality of delivery refers to "how well" interventions are implemented (Solmeyer & Constance, 2015). In the context of school interventions, quality of delivery is often conceptualized as teaching practices displayed during the delivery of an intervention that promote student outcomes (Buckley, Moore, Boardman, Arya, & Maul, 2017; Warren, Domitrovich, & Greenberg, 2009). For example, students' understanding of the core ideas in a lesson (i.e., outcome) may be enhanced when teachers ask open-ended questions and engage students in discussion (i.e., teaching practices).…”
Section: What Do We Know About Quality Of Delivery? (mentioning)
Confidence: 99%
“…To address these points of critique, researchers have employed a variety of design and analytic strategies. For example, Buckley and colleagues (2017) conducted a psychometric analysis of their implementation fidelity measure of a comprehensive reading intervention to ensure its reliability and validity. During the validation process, they adapted the measure to better show relations between core components and student outcomes.…”
Section: Implementation Fidelity Indices (mentioning)
Confidence: 99%
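The psychometric analysis Buckley and colleagues conducted is not reproduced in this excerpt, but one routine reliability check in such work, internal consistency across fidelity items, can be sketched generically. This is a stdlib-only illustration with invented data, not the actual procedure or data from the study:

```python
# Generic Cronbach's alpha for internal consistency of a fidelity scale.
# The ratings below are invented for illustration only.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned across the
    same set of observed lessons."""
    k = len(item_scores)
    # Population variance of each item across observations.
    item_vars = [statistics.pvariance(item) for item in item_scores]
    # Variance of each lesson's total score across all items.
    totals = [sum(obs) for obs in zip(*item_scores)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three hypothetical fidelity items rated across five observed lessons.
items = [
    [3, 4, 2, 4, 3],
    [3, 4, 2, 3, 3],
    [2, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

A high alpha indicates the items move together across lessons; validation work like that described above typically pairs such reliability evidence with checks that scores relate to student outcomes as theorized.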