Students with disabilities have been included in state accountability systems for more than a decade; however, only in the past few years have alternate assessments based on alternate achievement standards (AA-AAS) become stable enough to allow examination of these students' achievement growth. Using multiple years of data from Oregon's AA-AAS in Reading, we examined achievement growth for a sample of 1,061 elementary students with two growth models: a transition matrix and a multilevel linear growth model. With the transition matrix model, we found that a majority of students remained at the same performance level from one year to the next, whereas with the multilevel linear growth model, students' scores showed small but statistically meaningful growth from year to year. The article concludes by noting the advantages and disadvantages of these models for characterizing growth and their implications for policy and practice.
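Neither model is described in implementation detail above, but both can be illustrated with a brief, hypothetical sketch. The column names (student_id, year, score, perf_level) and the input file are assumptions for illustration, not details from the original study: the transition matrix is a year-to-year cross-tabulation of performance levels, and the multilevel linear growth model is a random-intercept, random-slope regression of scale score on year.

```python
# Hypothetical sketch of the two growth models named above.
# Column names and the data file are assumptions, not from the study.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per student per year.
df = pd.read_csv("aa_aas_reading.csv")  # assumed columns: student_id, year, score, perf_level

# (1) Transition matrix: proportion of students at each performance level
#     in the second year, conditional on their level in the first year.
wide = df.pivot(index="student_id", columns="year", values="perf_level")
y1, y2 = sorted(df["year"].unique())[:2]
transition = pd.crosstab(wide[y1], wide[y2], normalize="index")
print(transition.round(2))

# (2) Multilevel linear growth model: scale score regressed on year,
#     with a random intercept and random slope for each student.
growth = smf.mixedlm(
    "score ~ year", df, groups=df["student_id"], re_formula="~year"
).fit()
print(growth.summary())
```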
Students with significant cognitive disabilities are eligible to participate in two statewide testing options for accountability: alternate assessments or general assessments with appropriate accommodations. Participation guidelines are generally quite vague, leading to students "switching" test participation between years. In this study, we tracked test participation for two cohorts of students with a documented disability over 3 years. Results suggested approximately 25% of students who initially took the alternate assessment switched test type at least once, although patterns of switching were not consistent across disabilities. Students on the performance "bubble" were more likely to switch test participation. Test switching poses challenges for monitoring students' academic growth within accountability frameworks.
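As a rough illustration of how "switching" can be tallied, the sketch below flags students whose first observed test was the alternate assessment and who took a different test type in a later year. The column names and file are placeholders, not the study's actual data layout.

```python
# Hypothetical tally of test-type switching across years.
# Assumed columns: student_id, year, test_type ("alternate" or "general").
import pandas as pd

df = pd.read_csv("participation.csv").sort_values(["student_id", "year"])

first_test = df.groupby("student_id")["test_type"].first()  # test taken in the first year
n_types = df.groupby("student_id")["test_type"].nunique()   # distinct test types over the window

started_alternate = first_test.eq("alternate")
switched = n_types.gt(1)

switch_rate = (started_alternate & switched).sum() / started_alternate.sum()
print(f"Initial alternate-assessment takers who switched at least once: {switch_rate:.1%}")
```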
Students with significant cognitive disabilities present an assessment dilemma that centers on access and validity in large-scale testing programs. Typically, access is improved by eliminating construct-irrelevant barriers, while validity is improved, in part, through test standardization. In this article, one state's alternate assessment data were analyzed to determine the impact of (a) administration supports based on students' level of independence and (b) a scaffolded test administration format. Using structural equation modeling, we tested the extent to which students' level of independence mediated the relation between disabilities and latent content knowledge scores. We then tested the invariance of the measurement model across administration formats. The results provide evidence that these supports help students access the test without compromising the validity of test-based inferences.
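The mediation structure described here (disability status, level of independence, latent content knowledge) can be written in lavaan-style model syntax. The sketch below uses the semopy package with placeholder variable names (item1 to item3, independence, disability); it is a minimal illustration of the mediation step, not the authors' full measurement-invariance analysis.

```python
# Hypothetical mediation model in lavaan-style syntax, fit with semopy.
# Variable names are placeholders, not the study's actual measures.
import pandas as pd
from semopy import Model

# Measurement model: latent content knowledge indicated by three test items.
# Structural model: level of independence mediates disability -> knowledge.
spec = """
knowledge =~ item1 + item2 + item3
independence ~ disability
knowledge ~ independence + disability
"""

data = pd.read_csv("alt_assessment.csv")  # assumed columns match the names in spec
model = Model(spec)
model.fit(data)
print(model.inspect())  # parameter estimates, including the indirect path components
```

Testing measurement invariance across administration formats would then involve fitting the measurement model separately for each format group and comparing constrained and unconstrained solutions.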
Modeling growth for students with significant cognitive disabilities (SWSCD) is difficult due to a variety of factors, including, but not limited to, missing data, test scaling, group heterogeneity, and small sample sizes. These challenges may account for the paucity of previous research exploring the academic growth of SWSCD. Our study represents a unique context in which a reading assessment, calibrated to a common scale, was administered statewide to students in consecutive years across Grades 3 to 5. We used a nonlinear latent growth curve pattern-mixture model to estimate students’ achievement and growth while accounting for patterns of missing data. While we observed significant intercept differences across disability subgroups, there were no significant slope differences. Incorporating missing data patterns into our models improved model fit. Limitations and directions for future research are discussed.
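A full nonlinear latent growth curve pattern-mixture model is beyond a short example, but the core pattern-mixture idea, letting growth parameters differ by missing-data pattern, can be approximated with a linear mixed model. The sketch below is a simplified, linear stand-in with assumed column names, not the authors' actual specification.

```python
# Simplified linear stand-in for a pattern-mixture growth model:
# growth parameters are allowed to vary by missing-data pattern.
# Column names and the data file are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("swscd_reading.csv")  # assumed columns: student_id, grade, score (NaN if untested)

# Label each student's missing-data pattern across Grades 3 to 5, e.g. "obs-obs-miss".
pattern = (
    df.assign(status=df["score"].notna().map({True: "obs", False: "miss"}))
      .sort_values("grade")
      .groupby("student_id")["status"]
      .agg("-".join)
      .rename("pattern")
      .reset_index()
)
df = df.merge(pattern, on="student_id")

# Center time at Grade 3 so the intercept reflects Grade 3 achievement.
df["time"] = df["grade"] - 3

# Intercepts and slopes differ by pattern; random intercept and slope per student.
obs = df.dropna(subset=["score"])
model = smf.mixedlm(
    "score ~ time * pattern", obs, groups=obs["student_id"], re_formula="~time"
).fit()
print(model.summary())
```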