Professional development (PD) is a potentially important mechanism for enhancing classroom practices and children's learning. In this large-scale randomized controlled trial, we examined the effectiveness of language and literacy PD, with and without coaching, offered at scale to early childhood educators (n = 546) across one state. Relative to the comparison condition, PD with coaching showed a small impact on the quantity of phonological awareness instruction, and PD both with and without coaching impacted the quality of phonological awareness and writing instruction. PD did not impact children's (n = 1,953, M age = 4.53) emergent literacy skills, as measured by the research team, or kindergarten readiness, as measured by the state's kindergarten readiness assessment, which focused exclusively on language and literacy skills. Although we can only speculate as to why this at-scale, state-sponsored PD did not realize its intended impacts, these findings, coupled with those from the literature, raise critical questions about current understandings of PD and the ability to achieve desired effects when PD is implemented at scale.
Educational Impact and Implications Statement
In the field of education, professional development is intended to improve classroom instruction and children's learning. However, we have a limited understanding of its effects, especially when it is used at scale with large numbers of educators. In this study, we examined the language and literacy professional development offered to early childhood educators by one state. We found that the professional development affected only a few aspects of classroom literacy instruction and did not affect young children's literacy learning. These results suggest that, to be effective, at-scale professional development may require greater attention to design and implementation, and they highlight the need for pilot work on the effects of professional development prior to large-scale investments.
The authors used nationally based, random-sample data from three different years (2009-2010, 2011-2012, and 2014-2015) for more than 16,000 first-grade students (n = 9,760, 3,657, and 3,121, respectively) to examine long-reported inadequacies of a commonly used early literacy assessment tool, the Observation Survey of Early Literacy Achievement (OSELA), chief among them the skewness and nonequal-interval nature of the scores obtained on its six individual tasks. These inadequacies have prevented the individual task scores from being used for program evaluation, screening, and progress monitoring. To mitigate these limitations, the authors employed Rasch analysis to create a scale that can be used to track a student's literacy achievement based on a combination of the OSELA task scores. Dimensionality analyses revealed that the OSELA measures one factor, which supported the decision to combine the individual tasks into one total score. The equal-interval total score was normally distributed at the beginning, middle, and end of first grade. Further, the authors conducted a predictive validation study of the total score to identify a range of cut scores that can be used in the fall of first grade to predict reading failure by year end. The authors maintain that the total score provides a more precise and efficient means of screening young students for reading failure and evaluating their progress over time. Implications for using the total score to make screening decisions and measure early reading progress are discussed.
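As context for the scaling approach described in the preceding abstract: in its standard dichotomous textbook form (the authors' exact specification for the OSELA tasks, such as a partial-credit extension for polytomous items, may differ), the Rasch model expresses the probability that student p succeeds on item i in terms of a person ability \theta_p and an item difficulty b_i, both located on a common logit, and hence equal-interval, scale:

P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

Estimating \theta_p jointly across the combined tasks is what yields an equal-interval total score of the kind the abstract describes, in contrast to the skewed raw task scores.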