2020
DOI: 10.1111/emip.12396
Score Reporting for Examinees with Incomplete Data on Large‐Scale Educational Assessments

Abstract: Technical difficulties occasionally lead to missing item scores and hence to incomplete data on computerized tests. It is not straightforward to report scores to the examinees whose data are incomplete due to technical difficulties. Such reporting essentially involves imputation of missing scores. In this paper, a simulation study based on data from three educational tests is used to compare the performances of six approaches for imputation of missing scores. One of the approaches, based on data mining, is the…
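The abstract compares approaches for imputing missing item scores before reporting. As a minimal illustrative sketch (assuming NumPy; these are three generic imputation rules commonly discussed in this literature, not necessarily the six approaches compared in the paper), missing 0/1 item scores can be filled before computing a total score as follows:

```python
import numpy as np

# Simulated 0/1 item-score matrix: 5 examinees x 8 items,
# with a few scores lost to a hypothetical technical glitch.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(5, 8)).astype(float)
scores[1, 3] = np.nan
scores[4, [0, 6]] = np.nan

def impute_as_wrong(x):
    """Score every missing response as incorrect (0)."""
    return np.where(np.isnan(x), 0.0, x)

def impute_person_mean(x):
    """Replace each missing score with the examinee's mean over observed items."""
    out = x.copy()
    person_means = np.nanmean(out, axis=1)
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = person_means[rows]
    return out

def impute_item_mean(x):
    """Replace each missing score with that item's mean over examinees."""
    out = x.copy()
    item_means = np.nanmean(out, axis=0)
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = item_means[cols]
    return out

for f in (impute_as_wrong, impute_person_mean, impute_item_mean):
    totals = f(scores).sum(axis=1)
    print(f.__name__, np.round(totals, 2))
```

The reported total differs by rule: scoring missing as wrong penalizes the examinee for the glitch, while mean-based rules borrow strength from observed responses, which is part of why the choice of imputation method matters for score reporting.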


Cited by 12 publications (35 citation statements) · References 56 publications
“…Importantly, the missingness process is summarized by the latent response variable. As an alternative, multiple imputation at the level of items can be employed to handle missing item responses properly [22, 23]. However, scoring missing item responses as wrong could be defended for validity reasons [24, 25, 26].…”
Section: Introduction (mentioning)
confidence: 99%
“…In this article, we show how these recent features of test data motivate the need for a longitudinal approach to examining data quality. Though multiple approaches exist to address missing data within a tested population (e.g., Peugh & Enders, 2004) and within individual item response vectors at a single test occasion (e.g., Sinharay, 2021), a key inference for test scores, especially in a pandemic, is about educational progress or decline for populations over time. The Standards for Educational and Psychological Testing (American Educational Research Association et al., 2014) stress the importance of describing the target population and collecting information on testing conditions for assessing validity evidence.…”
Section: Assessment Reliability Standard Errors Of Measurement Item F... (mentioning)
confidence: 99%
“…Huisman and Molenaar (2001) compared several missing-data imputation approaches for imputation of the total/raw score in psychology tests in the presence of missing item scores. Sinharay (2021) compared several missing-data imputation approaches for imputation of scaled scores in the presence of missing item scores in educational tests. Smits et al. (2002) compared several missing-data imputation approaches with respect to their accuracy in imputation of grade point averages in the presence of missing grades on several courses.…”
Section: Analysis Of Missing Data In Educational Measurement (mentioning)
confidence: 99%
“…Measurement professionals are familiar with the phenomenon of missing item scores, or incomplete tests, on educational assessments. A wide variety of problems related to missing item scores in educational or psychological measurement were tackled by researchers such as De Ayala et al. (2001), Finch (2008), Feinberg (2020), Sinharay (2021), Smits et al. (2002), and Xiao and Bulut (2020).…”
Section: Introduction (mentioning)
confidence: 99%