2020
DOI: 10.1177/0146621620977682
Improving Accuracy and Usage by Correctly Selecting: The Effects of Model Selection in Cognitive Diagnosis Computerized Adaptive Testing

Abstract: Decisions on how to calibrate an item bank might have major implications for the subsequent performance of the adaptive algorithms. One of these decisions is model selection, which can become problematic in the context of cognitive diagnosis computerized adaptive testing, given the wide range of models available. This article aims to determine whether model selection indices can be used to improve the performance of adaptive tests. Three factors were considered in a simulation study, that is, calibration sample…

Cited by 12 publications (17 citation statements)
References 35 publications
“…As is often done in CAT studies (e.g., Mulder & van der Linden, 2009), the item bank is assumed to be precisely calibrated; thus, the true model parameters were taken as known to compute the trait scores. This allows computing the upper-limit performance of the adaptive assessment, that is, what can be expected in practical settings provided the item parameters are accurately estimated (e.g., Sorrel et al., 2021). The R code used for data generation and the simulation study is available from the corresponding author upon request.…”
Section: Methods
confidence: 99%
“…In this matrix, referred to as the Q-matrix (Tatsuoka, 1983), each entry (q_jk) takes a value of 1 or 0 depending on whether item j measures attribute k or not, respectively. The Q-matrix construction process is usually supervised by domain experts (e.g., Li & Suen, 2013; Sorrel et al., 2016), although several empirical Q-matrix estimation and validation methods have been proposed in recent years with the aim of reducing the degree of subjectivity involved in the task (e.g., de la Torre & Chiu, 2016; Nájera, Sorrel et al., 2021). The correct specification of the Q-matrix is of major importance, since the presence of misspecifications can greatly disrupt the accuracy of attribute profile classifications (Gao et al., 2017; Rupp & Templin, 2008).…”
Section: Overview Of Cognitive Diagnosis Models
confidence: 99%
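Concretely, a Q-matrix is just a J × K binary matrix linking J items to the K attributes they measure. A minimal sketch with hypothetical items and attributes (not taken from any of the cited studies):

```python
# Hypothetical 4-item, 3-attribute Q-matrix: entry q_jk is 1
# iff item j measures attribute k.
Q = [
    [1, 0, 0],  # item 1 measures attribute 1 only
    [0, 1, 0],  # item 2 measures attribute 2 only
    [1, 1, 0],  # item 3 measures attributes 1 and 2
    [0, 1, 1],  # item 4 measures attributes 2 and 3
]

def attributes_measured(Q, j):
    """Return the (1-based) attribute indices measured by item j (1-based)."""
    return [k + 1 for k, q_jk in enumerate(Q[j - 1]) if q_jk == 1]
```

A misspecification in this context would be flipping one of these 0/1 entries, e.g. omitting attribute 2 from item 3, which is the kind of error the cited validation methods aim to detect.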
“…Under challenging estimation conditions, characterized, for example, by a small sample size, a larger estimation error can be expected to affect the performance of the CD-CAT [19]. To deal with this, Sorrel et al. (2021) recommended performing a model comparison analysis at the item level in order to minimize the number of parameters to be estimated [20]. To this end, the authors used the two-step LR test, an efficient approach to the likelihood ratio test [21].…”
Section: Item Response Model and Item Bank Calibration
confidence: 99%
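The logic of an item-level model comparison can be sketched generically (this is a plain likelihood-ratio decision rule, not the cited two-step procedure itself): a reduced model is retained for an item unless the general model fits significantly better, with degrees of freedom equal to the difference in item-parameter counts.

```python
def lr_statistic(loglik_general, loglik_reduced):
    """LR = 2 * (logL_general - logL_reduced); asymptotically chi-square
    distributed with df = difference in number of item parameters."""
    return 2.0 * (loglik_general - loglik_reduced)

# Chi-square critical values at alpha = .05 (standard table values).
CHI2_CRIT_05 = {1: 3.841, 2: 5.991, 3: 7.815}

def prefer_reduced(loglik_general, loglik_reduced, df):
    """Keep the reduced (fewer-parameter) model for this item unless the
    general model fits significantly better at alpha = .05."""
    return lr_statistic(loglik_general, loglik_reduced) <= CHI2_CRIT_05[df]
```

Retaining reduced models item by item in this way is what shrinks the number of parameters the CD-CAT must estimate under small calibration samples.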
“…This allowed for a more direct comparison of the different CAT implementations. In applied contexts, some sampling error is to be expected (for a discussion of this, see, for example, [19,20]). The plots were obtained using the cdcat.summary() function of the package, which uses the R package ggplot2 [41].…”
Section: Illustration
confidence: 99%