The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type can be scored polytomously for partial credit, but there are several possible methods for computing raw scores. This research evaluates several scoring approaches found in the literature, extending Wilson's approach to examine how credit for the selection and nonselection of both relevant and irrelevant options is incorporated. Results indicated that all methods have potential, but the plus/minus and true/false methods seemed the most promising for items using the "select all that apply" instruction set. Additionally, these methods showed a large increase in information per time unit over the dichotomous method.
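To make the scoring rules concrete, here is a minimal Python sketch of one common formulation of the dichotomous, true/false, and plus/minus raw-score methods for a select-all-that-apply item. The function name and the exact rules (e.g., flooring negative plus/minus totals at zero) are illustrative assumptions; definitions vary across the literature and may differ from those evaluated in the study.

```python
from typing import Set

def score_mr_item(selected: Set[str], key: Set[str], options: Set[str],
                  method: str = "plus_minus") -> int:
    """Score one select-all-that-apply item under several raw-score rules.

    These are common formulations; exact definitions vary by author.
    """
    if method == "dichotomous":
        # Full credit only for an exact match with the key.
        return int(selected == key)
    if method == "true_false":
        # One point per option whose selected/unselected status matches the key.
        return sum((opt in selected) == (opt in key) for opt in options)
    if method == "plus_minus":
        # +1 per correct selection, -1 per incorrect selection, floored at zero.
        return max(0, len(selected & key) - len(selected - key))
    raise ValueError(f"unknown method: {method}")

# Example: key = {A, C} among options {A, B, C, D}; examinee selects {A, B}.
print(score_mr_item({"A", "B"}, {"A", "C"}, {"A", "B", "C", "D"}, "true_false"))  # 2
print(score_mr_item({"A", "B"}, {"A", "C"}, {"A", "B", "C", "D"}, "plus_minus"))  # 0
```

In the example, the examinee earns true/false credit for correctly selecting A and correctly leaving D unselected, while the one correct and one incorrect selection cancel out under plus/minus scoring.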
The increasing use of innovative items in operational assessments has shed new light on polytomous testlet models. In this study, we examine the performance of several scoring models when polytomous items exhibit random testlet effects. Four models are considered: the partial credit model (PCM), the testlet-as-a-polytomous-item model (TPIM), the random-effect testlet model (RTM), and the fixed-effect testlet model (FTM). The performance of the models was evaluated in two adaptive testing settings in which testlets had nonzero random effects. The outcomes of the study suggest that, despite the manifest random testlet effects, PCM, FTM, and RTM perform comparably in trait recovery and examinee classification, with the overall accuracy of PCM and FTM in trait inference comparable to that of RTM. TPIM consistently underestimated the population variance and led to significant overestimation of measurement precision, showing limited utility for operational use. The results provide practical implications for choosing among polytomous testlet scoring models.
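For reference, a standard formulation of the PCM (Masters, 1982) gives the probability of scoring in category $x$ on item $i$ as

$$P(X_i = x \mid \theta) = \frac{\exp \sum_{k=0}^{x} (\theta - \delta_{ik})}{\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta - \delta_{ik})}, \qquad x = 0, 1, \ldots, m_i,$$

where $\theta$ is the latent trait, $\delta_{ik}$ are the item step parameters, $m_i$ is the highest score category, and the sum for $x = 0$ is defined to be zero. Roughly speaking, testlet models of the kind compared here augment $\theta$ with a testlet-specific effect $\gamma_{jd(i)}$ for person $j$ on the testlet containing item $i$, treated as random (RTM) or fixed (FTM); the exact parameterizations in the study may differ.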
This paper presents adaptive testing strategies for polytomous technology-enhanced innovative items. We investigate item selection methods that match examinees' ability levels in location and explore ways to leverage test-taking speed during item selection. Existing approaches to selecting polytomous items are mostly based on information measures and tend to suffer from skewed item pool usage. In this study, we introduce location indices for polytomous items and show that location-matched item selection substantially alleviates the usage problem and achieves more diverse item sampling. We also consider matching items' time intensities so that testing times can be regulated across examinees. A Monte Carlo simulation was conducted to examine the performance of the different item selection methods. The numerical experiments suggest that location-matched item selection achieves significantly better and more balanced item pool usage. Leveraging test-taking speed in item selection distinctly reduced both the average testing time and its variation across examinees. Both procedures incurred only marginal measurement costs (e.g., in precision and efficiency) yet showed significant improvement in administrative outcomes. The experiment in two test settings also suggested that the procedures can lead to different administrative gains depending on the test design.
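The core of location-matched selection can be sketched in a few lines of Python. Everything below is hypothetical (the item pool, the use of a single scalar location index per item, and the weighted time-intensity penalty); the paper's actual indices and selection criterion may differ.

```python
from typing import Optional
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical pool of 200 polytomous items: each item carries a scalar
# location index (e.g., a summary of its step parameters) and a time
# intensity (its expected contribution to testing time).
n_items = 200
location = rng.normal(0.0, 1.0, n_items)
time_intensity = rng.normal(4.0, 0.3, n_items)
administered = np.zeros(n_items, dtype=bool)

def select_item(theta_hat: float, target_time: Optional[float] = None,
                weight: float = 0.5) -> int:
    """Pick the unadministered item whose location best matches theta_hat.

    When target_time is given, deviations from the desired time intensity
    are penalized, so that testing times are regulated across examinees.
    """
    distance = np.abs(location - theta_hat)
    if target_time is not None:
        distance = distance + weight * np.abs(time_intensity - target_time)
    distance = np.where(administered, np.inf, distance)  # skip used items
    return int(np.argmin(distance))

item = select_item(theta_hat=0.8, target_time=3.9)
administered[item] = True
```

Because items are chosen by proximity to the ability estimate rather than by maximal information, exposure spreads across the pool instead of concentrating on the most informative items, which is the mechanism behind the more balanced pool usage reported above.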