Students with significant disabilities must participate in large-scale assessments, often through an alternate assessment judged against alternate achievement standards. Developing and administering this type of assessment requires balancing meaningful participation with accurate measurement. In this study, generalizability theory is used to estimate the dependability of reading items and tasks administered using two communication formats (receptive and expressive). The results reflect a trade-off between meaningful participation and accurate measurement of students with significant cognitive disabilities, particularly when the two formats are considered. Significant variance is obtained for persons interacting with tasks, whereas the effect of raters is negligible. Furthermore, these results appear to vary across administration formats.
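For context, a generalizability study of this kind estimates a variance component for each facet and combines them into a dependability coefficient. The following is a minimal sketch of a two-facet person × task × rater (p × t × r) design with the absolute-decision coefficient Φ; the facets, notation, and sample sizes here are standard generalizability theory conventions, not details reported in the abstract:

X_{ptr} = \mu + \nu_p + \nu_t + \nu_r + \nu_{pt} + \nu_{pr} + \nu_{tr} + \nu_{ptr,e}

\Phi = \frac{\sigma^2_p}{\sigma^2_p + \frac{\sigma^2_t}{n_t} + \frac{\sigma^2_r}{n_r} + \frac{\sigma^2_{pt}}{n_t} + \frac{\sigma^2_{pr}}{n_r} + \frac{\sigma^2_{tr}}{n_t n_r} + \frac{\sigma^2_{ptr,e}}{n_t n_r}}

Under this reading, a large person × task component \sigma^2_{pt} (the significant interaction reported above) inflates error unless more tasks are sampled, while a negligible rater component \sigma^2_r contributes little to the denominator.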
In this article, we highlight the need for a precisely defined construct in score-based validation and discuss the contribution of cognitive theories to defining the construct accurately and comprehensively. We propose a framework for integrating cognitively based theoretical and empirical evidence to specify and evaluate the construct. We apply this framework to an example case: division of fractions. Methods used to evaluate the example case included task analysis of the mathematical proof of the invert-and-multiply algorithm (sketched below), interviews with content and pedagogical experts, verbal protocols, and item analyses based on pilot test data. The proposed framework, however, is intended to generalize to other constructs.

In educational and psychological measurement, validation is a process of collecting and evaluating an array of evidence to support the appropriateness of score-based interpretations and uses. For academic tests, data are commonly gathered to evaluate alignment between item content and tested standards; concordance between performance on similar and dissimilar measures; predictability of subsequent outcomes; and consistency of results across time, forms, raters, and other facets. Although measurement experts have proposed psychometric models and analytic procedures for gathering validity and reliability evidence, an essential feature of construct-related evidence has received inadequate attention. Relying heavily on presupposed theories of learning, test developers frequently overlook the collection of evidence needed to document
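For reference, the proof underlying the invert-and-multiply task analysis can be sketched as follows; this is the standard textbook derivation, supplied here for illustration rather than taken from the article itself:

\frac{a}{b} \div \frac{c}{d} = \frac{a/b}{c/d} = \frac{(a/b)\cdot(d/c)}{(c/d)\cdot(d/c)} = \frac{(a/b)\cdot(d/c)}{1} = \frac{a}{b} \cdot \frac{d}{c}, \qquad b, c, d \neq 0.

The key step multiplies the numerator and denominator by the reciprocal d/c, which leaves the quotient's value unchanged while reducing the divisor to 1.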