Published: 2013
DOI: 10.1177/1534508413511488
Measuring Rater Reliability on a Special Education Observation Tool

Abstract: This study used generalizability theory to measure reliability on the Recognizing Effective Special Education Teachers (RESET) observation tool designed to evaluate special education teacher effectiveness. At the time of this study, the RESET tool included three evidence-based instructional practices (direct, explicit instruction; whole-group instruction; and discrete trial teaching) as the basis for special education teacher evaluation. Five raters participated in …
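The abstract names generalizability theory as the framework for estimating rater reliability. As a minimal sketch only (not the study's actual analysis, and with an invented ratings matrix), the variance-component decomposition behind a fully crossed persons × raters G-study can be computed from a two-way ANOVA like this:

```python
import numpy as np

# Hypothetical ratings: rows = observed teaching segments (objects of
# measurement), columns = raters. Values are illustrative only.
X = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
    [1, 2, 1, 2, 2],
], dtype=float)

n_p, n_r = X.shape
grand = X.mean()
p_means = X.mean(axis=1)   # per-segment means
r_means = X.mean(axis=0)   # per-rater means

# Two-way crossed ANOVA (p x r, one observation per cell)
ss_p = n_r * np.sum((p_means - grand) ** 2)
ss_r = n_p * np.sum((r_means - grand) ** 2)
ss_tot = np.sum((X - grand) ** 2)
ss_res = ss_tot - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Estimated variance components (negative estimates truncated at zero)
var_res = ms_res                            # person x rater interaction + error
var_r = max((ms_r - ms_res) / n_p, 0.0)     # rater (severity/leniency)
var_p = max((ms_p - ms_res) / n_r, 0.0)     # person (true-score variance)

# Relative G coefficient for a decision averaging over n_r raters
g_rel = var_p / (var_p + var_res / n_r)
print(f"variance components: p={var_p:.3f}, r={var_r:.3f}, pr,e={var_res:.3f}")
print(f"relative G coefficient ({n_r} raters): {g_rel:.3f}")
```

With this design, the G coefficient rises as more raters are averaged, which is exactly the kind of decision-study question (how many raters are enough?) that generalizability theory is used to answer.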

Cited by 19 publications (24 citation statements)
References 43 publications
“…Larger research studies, like those included in the Measures of Effective Teaching (MET) project, may include school administrators as raters but fail to include teachers of SWDs in the samples. Studies that do include special education teachers (e.g., Semmelroth & Johnson, 2013) have included peer raters but have not utilized school administrators as raters of the teachers' instruction. Understanding rater characteristics and the conditions surrounding scoring is critical to ensuring that all teachers receive fair evaluations, but it is even more critical for special education teachers, who are often evaluated by administrators who may not possess formal training or have expertise in the area of special education (Sledge & Pazey, 2013).…”
Section: Purpose of the Study
confidence: 99%
“…The special education teacher participants from Idaho were selected from an existing database of 21 teachers who had contributed video data files of their teaching for a previous study (Semmelroth & Johnson, 2013), and provided consent for their video data to be used in future research. The video data files for the Idaho participants were captured using the Teachscape video capture system and stored in the Teachscape secure online database.…”
Section: Participants
confidence: 99%
“…Most empirically based observation tools were developed for monitoring implementation in general education (e.g., Danielson, 2013; La Paro, Pianta, & Stuhlman, 2004) and may not measure qualities of instruction known to be essential in intervention (Holdheide, Goe, Croft, & Reschly, 2010; Johnson & Semmelroth, 2012; Semmelroth & Johnson, 2014). Not only are most current tools not designed for intervention, as Knight (2007) noted, finding the personnel and time for monitoring implementation is also a challenge.…”
confidence: 99%
“…Zaslow et al, 2011). Semmelroth and Johnson (2014) noted that efforts focused on developing and applying these types of measures of effective teaching have often …” (McLaughlin et al., Topics in Early Childhood Special Education, DOI: 10.1177/0271121416669924)
confidence: 99%