Universal Design for Learning (UDL) principles suggest that providing learners with multiple means of engagement, representation, and action and expression will help them become purposeful and motivated, resourceful and knowledgeable, and strategic and goal-directed (CAST, 2018). The purpose of this study was to explore the challenges and opportunities of adopting UDL principles for online course design, using the decision-making process defined by the Diffusion of Innovations theory (Rogers, 2003) as the theoretical framework. Seven online faculty members were interviewed regarding the challenges and opportunities that hindered or helped their decision to adopt UDL principles in online course design. Additionally, three faculty participants volunteered course materials as examples of how they applied UDL principles. The results highlight ways institutions of higher education can promote faculty adoption of UDL principles for online course design.
There are few empirical studies of teacher performance evaluation systems. Teachers are rightfully concerned about the degree to which evaluators' idiosyncratic biases might undermine the process. Training evaluators thoroughly and monitoring the reliability, validity, fairness, and cultural sensitivity of their ratings are essential steps toward promoting strong performance evaluation systems. This study examined the process of evaluating early childhood teachers in order to inform evaluator training. The researchers sought to determine the degree to which the expectations of those who develop training materials and conduct evaluator trainings differ from the typical performance ratings given by evaluators in the field. Researchers used several methods to prompt a systematic examination of the evaluator training process across four sequential phases of investigation: (a) panel ratings of item difficulty (a quantitative phase), (b) panel discussion and consensus building (a qualitative phase), (c) comparison of expected versus empirical item difficulty (a quantitative phase), and (d) presentation of the empirical difficulty levels to the panel for discussion (a qualitative phase). In this last phase, researchers presented the results of Rasch modeling to the panel, along with the degree of agreement between the empirical and expected difficulty levels. Panel members reported that the process of discussing their perceptions of expected item difficulty levels was valuable. They also reported that such discussion prompted them to reevaluate the training materials, the resource manuals, and other professional development resources. The study methods presented can be used to investigate and improve other personnel evaluation systems.
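As a minimal illustration of the kind of comparison made in phase (c), one simple way to quantify how well a panel's expected ordering of item difficulty matches empirically estimated (e.g., Rasch) difficulties is a rank correlation. The sketch below is illustrative only and is not the agreement analysis used in the study; the item count, expected ranks, and logit values are invented.

    # Illustrative sketch (not from the study): comparing a panel's expected item
    # difficulty ordering with empirical (e.g., Rasch-estimated) difficulties
    # using a Spearman rank correlation. All numbers below are hypothetical.

    def ranks(values):
        """Rank values from smallest to largest (1 = smallest), averaging ties."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for tied values
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman(x, y):
        """Spearman correlation, computed as the Pearson correlation of ranks."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Hypothetical data: panel's expected difficulty ranks and Rasch difficulty
    # estimates (logits) for the same six rubric items.
    expected_rank = [1, 2, 3, 4, 5, 6]
    rasch_logits = [-1.2, -0.4, -0.6, 0.3, 0.9, 1.5]
    print(spearman(expected_rank, rasch_logits))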
In this study, several statistical indexes of agreement were calculated using empirical data from a group of evaluators (n = 45) of early childhood teachers. The evaluators rated ten fictitious teacher profiles using the North Carolina Teacher Evaluation Process (NCTEP) rubric. Exact and adjacent agreement percentages were calculated for the group, and Kappa, weighted Kappa, Gwet's AC1, Gwet's AC2, and intraclass correlation coefficients (ICCs) were used to interpret the level of agreement between the raters and a panel of expert raters. Consistent with previous studies, Kappa statistics were low even in the presence of high levels of agreement, while weighted Kappa and Gwet's AC1 were less conservative than Kappa. Gwet's AC2 was undefined for most evaluators because the statistic breaks down when raters do not use each category on the rating scale a minimum number of times. Across the 2,250 ratings (45 evaluators rating ten profiles on five NCTEP standards), overall exact agreement was 68.7% and adjacent agreement was 87.6%. Inter-rater agreement coefficients ranged from .486 for Kappa to .706 for Gwet's AC2, with .563 for Gwet's AC1 and .667 for weighted Kappa. Although each statistic yielded different results for the same data, the inter-rater reliability of these evaluators of early childhood teachers was acceptable or higher for the majority of raters, whether described with summary statistics or with more precise measures of inter-rater reliability.
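A minimal sketch of how such agreement indexes can be computed for a single evaluator against an expert-panel reference is shown below. It assumes ratings coded on an ordinal 1-5 scale, uses made-up ratings of ten profiles, and omits Gwet's AC2 (a weighted extension of AC1) and the ICC; the function names and data are illustrative and are not taken from the study.

    # Illustrative sketch (not the authors' code): exact/adjacent agreement,
    # Cohen's Kappa, linearly weighted Kappa, and Gwet's AC1 for one evaluator's
    # ratings against an expert-panel reference, on an assumed 1-5 ordinal scale.

    from collections import Counter

    CATEGORIES = [1, 2, 3, 4, 5]  # assumed ordinal rating levels

    def exact_agreement(rater, reference):
        """Proportion of items rated identically."""
        return sum(a == b for a, b in zip(rater, reference)) / len(rater)

    def adjacent_agreement(rater, reference):
        """Proportion of items rated within one scale point."""
        return sum(abs(a - b) <= 1 for a, b in zip(rater, reference)) / len(rater)

    def cohens_kappa(rater, reference):
        """Chance-corrected exact agreement using the raters' marginal proportions."""
        n = len(rater)
        p_o = exact_agreement(rater, reference)
        m1, m2 = Counter(rater), Counter(reference)
        p_e = sum((m1[c] / n) * (m2[c] / n) for c in CATEGORIES)
        return (p_o - p_e) / (1 - p_e)

    def weighted_kappa(rater, reference):
        """Weighted Kappa with linear disagreement weights |i - j| / (C - 1)."""
        n = len(rater)
        span = max(CATEGORIES) - min(CATEGORIES)
        m1, m2 = Counter(rater), Counter(reference)
        obs = Counter(zip(rater, reference))
        num = sum((abs(i - j) / span) * (obs[(i, j)] / n)
                  for i in CATEGORIES for j in CATEGORIES)
        den = sum((abs(i - j) / span) * (m1[i] / n) * (m2[j] / n)
                  for i in CATEGORIES for j in CATEGORIES)
        return 1 - num / den

    def gwets_ac1(rater, reference):
        """Gwet's AC1 chance-agreement correction for two raters."""
        n = len(rater)
        p_o = exact_agreement(rater, reference)
        m1, m2 = Counter(rater), Counter(reference)
        pi = {c: (m1[c] / n + m2[c] / n) / 2 for c in CATEGORIES}
        p_e = sum(pi[c] * (1 - pi[c]) for c in CATEGORIES) / (len(CATEGORIES) - 1)
        return (p_o - p_e) / (1 - p_e)

    if __name__ == "__main__":
        # Hypothetical ratings of ten teacher profiles on one standard.
        evaluator = [3, 4, 3, 2, 5, 3, 4, 2, 3, 4]
        panel = [3, 4, 4, 2, 5, 3, 3, 2, 3, 5]
        print("exact agreement   ", exact_agreement(evaluator, panel))
        print("adjacent agreement", adjacent_agreement(evaluator, panel))
        print("Kappa             ", cohens_kappa(evaluator, panel))
        print("weighted Kappa    ", weighted_kappa(evaluator, panel))
        print("Gwet's AC1        ", gwets_ac1(evaluator, panel))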