Growth-based approaches to federal accountability are receiving considerable attention because they have the potential to reward schools and teachers for improving student performance over time by measuring student progress at all levels of the performance spectrum, including progress by students who have not reached proficiency on state accountability assessments. The use of growth in accountability holds promise for students with disabilities, but measuring changes over time in academic performance with large-scale annual assessments is complex. The authors discuss practical challenges in measuring and modeling growth for students with disabilities. In addition, they identify and describe three areas in need of research on the measurement of growth: the impact of testing accommodations, the impact of test difficulty, and the longitudinal characteristics of the population of students with disabilities.
This paper is the second in a series from Educational Testing Service (ETS) that conceptualizes next-generation English language proficiency (ELP) assessment systems for K-12 English learners (ELs) in the United States. The first paper articulated a high-level conceptualization of next-generation ELP assessment systems (Hauck, Wolf, & Mislevy, 2016); the third addressed issues related to summative ELP assessments that emerged from the presentations and discussions at the English Language Proficiency Assessment Research working meeting (Wolf, Guzman-Orth, & Hauck, 2016); and the fourth focused on a key concern within such systems, the initial identification and classification of ELs (Lopez, Pooler, & Linquanti, 2016). The goal of this paper is to address accessibility issues in the context of ELP assessments and to discuss critical considerations for improving the accessibility of ELP assessments for ELs and ELs with disabilities. Although accessibility for ELs and ELs with disabilities taking content assessments is also important, content assessments are beyond the scope of this paper. We discuss challenges and possible directions for ongoing and future ELP assessment development, along with policy implications and research considerations, to improve the ELP testing experience for all users.
This validity study examined differential item functioning (DIF) on large-scale state standards-based English-language arts assessments at grades 4 and 8, comparing students without disabilities taking the test under standard conditions with students who are blind or visually impaired taking the test in either a large print or braille form. Using the Mantel-Haenszel method, only one item at each grade was flagged as displaying large DIF, in each case favoring students without disabilities. Additional items were flagged as exhibiting intermediate DIF, with some items favoring each group. A priori hypothesis coding and attempts to predict the effects of the large print and braille accommodations on DIF showed no relationship with which items were actually flagged, although some a posteriori explanations could be offered. The results are seen as supporting the accessibility and validity of the current test for students who are blind or visually impaired, while also identifying areas for improvement, mainly attention to formatting and consistency.
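For readers unfamiliar with the Mantel-Haenszel procedure mentioned above, it stratifies examinees on total test score, pools the 2x2 (group by correct/incorrect) tables across strata into a common odds ratio, and converts that ratio to the ETS delta scale for flagging. The Python sketch below is a minimal illustration under assumed inputs; the array names, the simplified A/B/C thresholds, and the omission of the significance tests used in the full ETS flagging rules are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group, focal="focal"):
    """Mantel-Haenszel common odds ratio and ETS delta for one dichotomous item.

    item  : 0/1 item scores
    total : total test scores used as the matching (stratifying) variable
    group : group labels; `focal` marks the focal group, all others are reference
    """
    item, total, group = map(np.asarray, (item, total, group))
    num = den = 0.0
    for k in np.unique(total):                      # one stratum per total-score level
        s = total == k
        ref, foc = s & (group != focal), s & (group == focal)
        a, b = np.sum(item[ref] == 1), np.sum(item[ref] == 0)  # reference right/wrong
        c, d = np.sum(item[foc] == 1), np.sum(item[foc] == 0)  # focal right/wrong
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha = num / den if den > 0 else float("nan")  # MH common odds ratio
    delta = -2.35 * np.log(alpha)                   # ETS delta; negative values favor the reference group
    return alpha, delta

def ets_category(delta):
    """Simplified A/B/C flag based on |delta| alone (the operational ETS rules
    also require statistical significance tests, omitted in this sketch)."""
    if abs(delta) < 1.0:
        return "A (negligible)"
    return "C (large)" if abs(delta) >= 1.5 else "B (intermediate)"
```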
This study examined whether distractor choices functioned differently for students without learning disabilities than for students with learning disabilities who received no accommodation, students with learning disabilities who received a read-aloud accommodation, and students with learning disabilities who received some form of accommodation other than read-aloud. The study's purpose was twofold: (a) to examine the results of a differential distractor functioning (DDF) analysis to determine whether the distractors functioned differently for the various groups of students and (b) to help determine whether the test could be modified for students with learning disabilities by removing a distractor while maintaining adequate test validity and information.
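As a concrete illustration of a distractor-level comparison, the sketch below cross-tabulates which incorrect option each group chose and applies a chi-square test of independence. The group labels, option labels, toy data, and the use of a chi-square test are assumptions made for illustration; they are a simplified stand-in and do not reproduce the study's actual DDF procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

def distractor_table(responses, groups, distractors=("B", "C", "D")):
    """Cross-tabulate distractor choices by group for one multiple-choice item.

    responses   : chosen option label per examinee (correct responses are not
                  counted, so only incorrect choices enter the table)
    groups      : group label per examinee
    distractors : labels of the incorrect options for this item
    """
    responses, groups = np.asarray(responses), np.asarray(groups)
    group_labels = np.unique(groups)
    table = np.array([
        [np.sum((groups == g) & (responses == d)) for d in distractors]
        for g in group_labels
    ])
    return group_labels, table

# Toy data only: four hypothetical groups and their option choices on one item.
labels, table = distractor_table(
    responses=["B", "C", "B", "D", "C", "B", "D", "C"],
    groups=["no_LD", "no_LD", "LD_no_accom", "LD_no_accom",
            "LD_read_aloud", "LD_read_aloud",
            "LD_other_accom", "LD_other_accom"],
)

# A significant chi-square suggests the distractor-choice pattern differs
# across groups, i.e., a possible DDF signal worth closer review.
chi2, p, dof, expected = chi2_contingency(table)
```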