2012
DOI: 10.1080/10627197.2012.715019
Validating Arguments for Observational Instruments: Attending to Multiple Sources of Variation

Abstract: Measurement scholars have recently constructed validity arguments in support of a variety of educational assessments, including classroom observation instruments. In this article, we note that users must examine the robustness of validity arguments to variation in the implementation of these instruments. We illustrate how such an analysis might be used to assess a validity argument constructed for the Mathematical Quality of Instruction instrument, focusing in particular on the effects of varying the rater poo…

Cited by 40 publications (20 citation statements)
References 29 publications
“…Efforts invested in this pursuit are likely to benefit both the program and the students being evaluated, as research studies have demonstrated that when validated tools are used for formative teacher evaluation and tailored feedback, they can successfully improve teaching quality (Allen, Pianta, Gregory, Mikami, & Lun, 2011; McCollum, Hemmeter, & Hsieh, 2011). However, if these tools are implemented poorly or inconsistently, the promised returns diminish (Hill et al., 2012).…”
Section: Sources of Information
Confidence: 93%
“…Teacher ambitious instruction scores were generated from videotaped lessons of instruction captured over the course of 2 years. Teachers averaged 5.23 videotaped lessons (mode = 6), allowing for sufficient levels of predictive reliability (Hill, Charalambous, Blazar, et al., 2012). Raters had a background in mathematics or mathematics education, passed a certification exam, and completed ongoing calibrations.…”
Section: Classroom Observations and Achievement
Confidence: 99%
“…The CEQ was catalyzed by the instruments named; however, the CEQ was specifically designed to address the temporal and fiscal limitations of observation research as well as the limitations of instructor self-reporting methods (Hill et al., 2012). The process of creating the CEQ involved an adaptation and transformation of constructs present on the cited protocols, such that selected items were translated into classroom practices that could be measured by student self-report as well as through observational methods.…”
Section: Higher Education Reform and Assessment
Confidence: 99%
“…A two-way mixed model was necessary because the study design called for data collection from a fixed group of observers and a random sample of students (Haber, Barnhart, Song, & Gruden, 2005; Hill et al., 2012; Shrout & Fleiss, 1979).…”
Section: Analytic Strategy
Confidence: 99%