Scaling, Linking, and Reporting in a Periodic Assessment System (2012)
DOI: 10.1111/j.1745-3984.2012.00166.x

Abstract: A new entry in the testing lexicon is through‐course summative assessment, a system consisting of components administered periodically during the academic year. As defined in the Race to the Top program, these assessments are intended to yield a yearly summative score for accountability purposes. They must provide for both individual and group proficiency estimates and allow for the measurement of growth. They must accommodate students who vary in their patterns of curricular exposure. Because they are meant t…

Cited by 4 publications (3 citation statements); references 40 publications.
“…The assessments would be designed to be engaging and to be valuable learning experiences in and of themselves. Because there would be many of them, they would be administered at various times throughout the school year, with the results aggregated within-student for purposes of representing achievement (much as teachers aggregate evidence to award an individual’s course grade, or the major leagues aggregate wins and losses to determine which team goes to the championships; see Mislevy & Zwick, 2012, for more on the significant psychometric issues attendant to such aggregation). Also due to their number, each assessment would have considerably less influence, in contrast to the onetime test of today, eliminating the problem of a student or class having a “bad day.” These new assessments might also incidentally provide tentative formative results, pointing teachers toward a student’s placement in a learning progression and how that student’s problem-solving processes might be improved, results to be followed up with more targeted, teacher-directed data gathering.…”
Section: An Elaboration of Third-Generation Assessment Themes (mentioning; confidence: 99%)
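The excerpt above describes aggregating many periodic assessment results within a student into one summative score, much as a teacher combines evidence into a course grade. A minimal sketch of such an aggregation, assuming a simple weighted average (the weights and scores are illustrative, not from the paper):

```python
# Hypothetical sketch: combining a student's through-course component
# scores into one summative score via a weighted average. The weighting
# scheme is an illustrative assumption, not the paper's method.

def summative_score(component_scores, weights=None):
    """Weighted within-student aggregation of periodic component scores."""
    if weights is None:
        weights = [1.0] * len(component_scores)  # equal weighting by default
    if len(weights) != len(component_scores):
        raise ValueError("one weight per component is required")
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(component_scores, weights)) / total_weight

# Four through-course components, with later units weighted more heavily.
print(summative_score([72.0, 80.0, 85.0, 90.0], weights=[1, 1, 2, 2]))
```

Because many components contribute, no single administration dominates the total, which is the point the excerpt makes about eliminating a "bad day."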
“…Measuring growth is particularly problematic because higher or lower performance depends on the particular language uses, topic, process, and cross‐cutting theme experiences a student has had before and between time points. Multidimensional measurement models can help sort out the differing profiles across students (Jang; Mislevy & Zwick, 2012), but even so, the tasks provide far less information about individual students than when contextualization and conditional inference can be employed. This is a design tradeoff between broadly comparable evidence and individually useful evidence.…”
Section: Implications for Assessment (mentioning; confidence: 99%)
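The multidimensional measurement models mentioned above score students on several latent dimensions at once. A minimal sketch, assuming a compensatory multidimensional 2PL item response function (the item parameters and student profiles below are illustrative, not drawn from the cited papers):

```python
import math

# Illustrative sketch: a compensatory multidimensional 2PL item response
# function, the kind of model that can distinguish differing ability
# profiles across students. All parameter values are assumptions.

def p_correct(theta, a, d):
    """Probability of a correct response given ability vector theta,
    discrimination vector a, and intercept d."""
    logit = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-logit))

# Two students with different two-dimensional profiles, same item.
item_a, item_d = [1.2, 0.5], -0.3
print(p_correct([1.0, -0.5], item_a, item_d))  # stronger on dimension 1
print(p_correct([-0.5, 1.0], item_a, item_d))  # stronger on dimension 2
```

Because the item loads more heavily on the first dimension, the two profiles yield different success probabilities even at comparable overall ability, which is how such models "sort out" profiles.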
“…von Davier, Xu, and Carstensen used a general latent variable model to measure growth for the Programme for International Student Assessment (PISA). Mislevy and Zwick (2012) discussed the scaling and linking issues in reporting the results of a periodic assessment system. Some growth models assessed growth by the changes of item parameters (Doran & Jiang; Kaplan & Sweetman; Reckase; von Davier), for example, the Embretson model (Embretson; Xu & Qian).…”
(mentioning; confidence: 99%)
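One simple way to operationalize the growth measurement discussed above is to estimate a student's ability at two administrations whose items sit on a common (linked) scale and take the difference. A minimal sketch, assuming a Rasch model with illustrative item difficulties and a grid-search maximum-likelihood estimate (this is not any of the cited models):

```python
import math

# Hypothetical sketch: growth as the change in a Rasch ability estimate
# between two linked administrations. Item difficulties are illustrative
# and assumed to already be on a common scale via linking.

def rasch_mle(responses, difficulties, grid_step=0.01):
    """Grid-search maximum-likelihood ability estimate for 0/1 responses."""
    def log_lik(theta):
        ll = 0.0
        for x, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll
    grid = [i * grid_step for i in range(-400, 401)]  # theta in [-4, 4]
    return max(grid, key=log_lik)

items = [-1.0, -0.5, 0.0, 0.5, 1.0]   # common difficulty scale after linking
theta_fall = rasch_mle([1, 1, 0, 0, 0], items)
theta_spring = rasch_mle([1, 1, 1, 1, 0], items)
print(theta_spring - theta_fall)       # positive difference indicates growth
```

The linking step (placing both administrations' item parameters on one scale) is exactly where the scaling and linking issues discussed by Mislevy and Zwick arise.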