In response to the accountability movement in the United States, a plethora of educational policies and standards has emerged at various levels to promote teacher assessment competency, with a focus on preservice assessment education. However, despite these policies and standards, research has shown that beginning teachers continue to exhibit low levels of assessment competency. Limited assessment education that is potentially misaligned with assessment standards and classroom practices has been identified as one factor contributing to this lack of competency. Accordingly, the purpose of this study was to analyze the alignment between teacher education accreditation policies, professional standards for teacher assessment practice, and preservice assessment course curriculum. Through a curriculum alignment methodology involving two policy documents, two professional standards documents, and syllabi from 10 Florida-based, Council for Accreditation of Teacher Education–certified teacher education programs, the results of this study identify points of alignment and misalignment across policies, standards, and curricula. The study concludes with a discussion of the current state of assessment education, with implications for enhancing teacher preparation in this area and for future research on assessment education.
This study compared five common multilevel software packages via Monte Carlo simulation—HLM 7, Mplus 7.4, R (lme4 v1.1-12), Stata 14.1, and SAS 9.4—to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data included population slope variances that were zero or near zero for two of the five random slopes. Generally, when yielding admissible solutions, all five software packages produced comparable and reasonably unbiased parameter estimates. However, noticeable differences among the five packages arose in terms of speed, convergence rates, and the production of standard errors for random effects, especially when the variances of these effects were zero in the population. The results of this study suggest that applied researchers should carefully consider which random effects they wish to include in their models. In addition, nonconvergence rates vary across packages, and models that fail to converge in one package may converge in another.
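The scenario the abstract describes—fitting a random slope whose population variance is zero—can be illustrated with a small simulation. The sketch below uses Python's statsmodels as a stand-in (it is not one of the five packages compared in the study), and the sample sizes and variance values are illustrative assumptions, not the study's simulation conditions. Fitting a random effect with zero population variance typically drives the estimate to the boundary of the parameter space, which is exactly the situation in which packages tend to warn, fail to converge, or diverge in their reported standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate two-level data: 50 groups, 30 observations each (illustrative sizes).
# The slope of x varies across groups with variance tau2_slope; setting
# tau2_slope = 0.0 mimics the "zero in the population" case from the study.
n_groups, n_per = 50, 30
tau2_int, tau2_slope, sigma2 = 0.5, 0.0, 1.0

group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u0 = rng.normal(0.0, np.sqrt(tau2_int), n_groups)    # random intercepts
u1 = rng.normal(0.0, np.sqrt(tau2_slope), n_groups)  # random slopes (all zero here)
y = (1.0 + u0[group]) + (0.5 + u1[group]) * x \
    + rng.normal(0.0, np.sqrt(sigma2), size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})

# Random intercept plus random slope for x. Because the true slope variance
# is zero, the estimate sits at the boundary and the fit may emit
# convergence warnings--the behavior the study found varies across packages.
model = smf.mixedlm("y ~ x", df, groups="group", re_formula="~x")
result = model.fit()
print(result.cov_re)  # estimated random-effects covariance matrix
```

In a fit like this, the fixed effects are usually recovered well even when the random-slope variance estimate collapses to (or near) zero; it is the variance component and its standard error that packages handle differently.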
The authors describe a process of self-assessment attuned to equity and justice in the policies and practices that affect student diversity, namely, those associated with the selection of candidates. The disproportionate rate of rejection for applicants from underrepresented groups and the unsystematic process of applicant selection operated as a hidden curriculum, limiting the program's opportunities to foster meaningful relationships among diverse groups of students. The authors describe institutional and sociopolitical conditions, as well as individual actions, reflecting a faculty's will to policy. Faculty efforts both supported and challenged systemic change to increase racial and ethnic diversity among aspiring educational administrators.