2018
DOI: 10.1177/0146621618813104

Q-Matrix Refinement Based on Item Fit Statistic RMSEA

Abstract: A Q-matrix, which reflects how attributes are measured for each item, is necessary when applying a cognitive diagnosis model to an assessment. In most cases, the Q-matrix is constructed by experts in the field and may be subjective and incorrect. One efficient method to refine the Q-matrix is to employ a suitable statistic that is calculated using response data. However, this approach is limited by its need to estimate all items in the Q-matrix even if only some are incorrect. To address this challenge, this s…
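The item-fit statistic in the title is an item-level RMSEA: a weighted discrepancy between observed and model-predicted correct-response rates across latent attribute classes. A minimal sketch of that idea, assuming the class proportions and both sets of probabilities have already been estimated (function name and inputs are illustrative, not the paper's code):

```python
import numpy as np

def item_rmsea(class_probs, p_model, p_obs):
    """Item-level RMSEA for one item (illustrative sketch).

    class_probs : (C,) estimated proportions of the C latent attribute classes
    p_model     : (C,) model-predicted P(correct) for this item in each class
    p_obs       : (C,) observed proportion correct for this item in each class
    """
    class_probs = np.asarray(class_probs, dtype=float)
    diff = np.asarray(p_obs, dtype=float) - np.asarray(p_model, dtype=float)
    # Root of the class-probability-weighted mean squared discrepancy;
    # a large value flags an item whose q-vector may be misspecified.
    return float(np.sqrt(np.sum(class_probs * diff ** 2)))
```

An item whose q-vector is correct should yield predicted probabilities close to the observed ones in every class, so its RMSEA stays near zero.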


Cited by 11 publications (8 citation statements) | References 21 publications
“…Overall, there is convincing evidence supporting the fit of the ACDM to the data. The large values of the fit indices in this study could be due to the small sample size (N = 500) and the large number of test items (J = 35) (Kang, Yang, & Zeng, 2018; Kunina-Habenicht et al., 2009; Lei & Li, 2016; Lin & Weng, 2014). In addition to checking the absolute fit of the model to the data, the fit of the ACDM was verified by estimating classification consistency P_c and classification accuracy P_a.…”
Section: Model Fit
confidence: 99%
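The classification consistency (P_c) and accuracy (P_a) indices cited above are commonly estimated from examinees' posterior latent-class probabilities. A minimal sketch of the standard posterior-based estimators (function name is an assumption, not from the cited paper):

```python
import numpy as np

def classification_indices(posterior):
    """Estimate test-level classification accuracy (P_a) and
    consistency (P_c) from an N x C matrix of posterior probabilities
    over the C latent attribute classes.
    """
    posterior = np.asarray(posterior, dtype=float)
    # P_a: expected agreement between the MAP classification and the truth
    p_a = float(np.mean(posterior.max(axis=1)))
    # P_c: expected agreement between classifications on two parallel forms
    p_c = float(np.mean(np.sum(posterior ** 2, axis=1)))
    return p_a, p_c
```

Both indices equal 1 when every examinee's posterior puts all mass on a single class, and shrink toward 1/C as the posteriors flatten.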
“…Furthermore, it would be interesting to study whether the inclusion of model fit indices in the iterative procedure could improve its performance. For instance, Kang et al. (2019) used the item-level version of the root mean square error of approximation (RMSEA), which provided good results under the DINA model. For the general CDM framework, Akaike’s information criterion (AIC; Akaike, 1974) and the Bayesian information criterion (BIC; Schwarz, 1978), which have previously been used as fit indices in CDMs (e.g., Chen et al., 2013), could be good candidates for selecting the suggested q-vector.…”
Section: Discussion
confidence: 99%
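The AIC and BIC mentioned in the quote above are simple functions of a fitted model's maximized log-likelihood; lower values indicate a better penalized fit. A minimal sketch (names are illustrative):

```python
import math

def aic_bic(log_lik, n_params, n_obs):
    """AIC and BIC for a fitted model.

    log_lik  : maximized log-likelihood of the model
    n_params : number of free parameters
    n_obs    : number of observations (e.g., examinees)
    """
    aic = -2.0 * log_lik + 2.0 * n_params
    # BIC penalizes parameters more heavily as the sample grows
    bic = -2.0 * log_lik + n_params * math.log(n_obs)
    return aic, bic
```

When comparing candidate q-vectors for an item, the vector whose model attains the lowest AIC or BIC would be the suggested one.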
“…For instance, the Q-matrix of a school exam of mathematical operations seems easier to specify (e.g., “8 + 3 × 2” would easily be detected as measuring, for example, “sum” and “multiplication,” but not “subtraction” or “division”) than the Q-matrix of a reading comprehension test, a clinical diagnostic test, or a test assessing students’ competencies (e.g., Sorrel et al. [2016] reported lower inter-rater reliability for more abstract attributes like “Study attitudes” compared with attributes easier to objectivize like “Helping others”). In fact, the Q-matrix of the popular fraction-subtraction data set (Tatsuoka, 1990), which does not belong to a particularly ambiguous knowledge domain, is still controversial (Kang et al., 2019). Thus, the degree of uncertainty involved in the process could reasonably be higher than what has been assumed, especially when the response processes of the knowledge domain are somehow subjectively defined.…”
Section: The GDI Method of Empirical Q-Matrix Validation
confidence: 99%
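The q-vector idea in the arithmetic example above is easy to make concrete: each row of the Q-matrix marks which attributes an item requires, and under the DINA model an examinee's ideal response is 1 only if they master all of them. A minimal sketch with a hypothetical three-attribute coding:

```python
import numpy as np

# Hypothetical attribute order: [addition, multiplication, subtraction].
# An arithmetic item requiring addition and multiplication (but not
# subtraction) would get the q-vector below.
q_vector = np.array([1, 1, 0])

def dina_ideal_response(alpha, q):
    """DINA ideal response eta = prod_k alpha_k ** q_k: 1 only when the
    examinee masters every attribute the q-vector requires.

    alpha : binary attribute-mastery profile of one examinee
    q     : binary q-vector of one item
    """
    return int(np.all(np.asarray(alpha) >= np.asarray(q)))
```

Misspecifying even one entry of the q-vector changes which mastery profiles are expected to answer correctly, which is why empirical validation of expert-built Q-matrices matters.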
“…Thus, there is an urgent need for an automatic and intelligent means to construct the Q-matrix. During the past decade, constructing the mapping relationship between items and latent attributes from examinees’ response data has become a field of intense study (e.g., Barnes, 2010; Chen, 2017; Chen et al., 2015, 2018, 2021; Chiu, 2013; Chung, 2014; Close, 2012; de la Torre, 2008; Desmarais, 2012; Desmarais et al., 2012; Desmarais & Naceur, 2013; Kang et al., 2019; Lim & Drasgow, 2017; Liu, 2016; Liu et al., 2012, 2013; Sun et al., 2014, 2015; Wang et al., 2020; Xiang, 2013; Xu & Shang, 2018; Yu & Cheng, 2019).…”
Section: Introduction
confidence: 99%