The multiple-group categorical factor analysis (FA) model and the graded response model (GRM) are commonly used to examine polytomous items for differential item functioning (DIF), in order to detect possible measurement bias in educational testing. In this study, the multiple-group categorical FA model (MC-FA) and the multiple-group normal-ogive GRM are unified under the common framework of discretization of a normal variate. We rigorously justify a set of identified parameters and determine the identifiability constraints needed to make the parameters just-identified and estimable within the common MC-FA framework. Under this framework, the difference between the categorical FA model and the normal-ogive GRM reduces to the use of two different sets of identifiability constraints, rather than any substantive distinction between categorical FA and GRM. We therefore compare the performance of the categorical FA and GRM approaches to DIF assessment through simulation studies on MC-FA models with their corresponding sets of identifiability constraints. Our results show that, under scenarios with varying degrees of DIF for examinees of different ability levels, models with the GRM type of identifiability constraints generally perform better on DIF detection, with higher testing power. General guidelines regarding the choice of just-identified parameterization are also provided for practical use.
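As a worked illustration of the shared framework (a minimal sketch with hypothetical item parameters, not the study's code), the snippet below computes category probabilities for one examinee under the categorical FA parameterization and under the normal-ogive GRM parameterization, both read as discretizations of the same underlying normal variate:

```python
# Minimal sketch: categorical FA and normal-ogive GRM as two parameterizations of the
# same discretized normal variate y* = lambda*theta + eps. Loading, thresholds, and
# residual variance below are hypothetical values, not estimates from the study.
import numpy as np
from scipy.stats import norm

theta = 0.7                                  # latent trait of one examinee
lam, psi = 0.8, 0.64                         # FA loading and residual variance of y*
tau = np.array([-1.0, 0.2, 1.1])             # FA thresholds (four ordered categories)

# Categorical FA view: cut y* ~ N(lam*theta, psi) at the thresholds.
cuts = np.concatenate(([-np.inf], tau, [np.inf]))
p_fa = np.diff(norm.cdf((cuts - lam * theta) / np.sqrt(psi)))

# Normal-ogive GRM view: a = lam/sqrt(psi), b_k = tau_k/lam, P(Y >= k) = Phi(a*(theta - b_k)).
a, b = lam / np.sqrt(psi), tau / lam
p_ge = np.concatenate(([1.0], norm.cdf(a * (theta - b)), [0.0]))
p_grm = -np.diff(p_ge)

print(np.allclose(p_fa, p_grm))              # True: same probabilities, different constraints
```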
A speeded item response model is proposed. We consider the situation where, in a time-limit test, examinees may postpone the harder items to a later point in the testing period. With such a strategy, examinees may not finish answering some of the harder items within the allocated time. The proposed model describes this mechanism by incorporating a speeded-effect term into the two-parameter logistic item response model. A Bayesian estimation procedure for the proposed model using Markov chain Monte Carlo is presented, and its advantage over the two-parameter logistic item response model in a speeded test is demonstrated through simulations. For illustration, the methodology is applied to physics examination data from the Department Required Test for college entrance in Taiwan.
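The following simulation sketch conveys the idea; the multiplicative position-dependent decay is an assumed form for the speeded-effect term, not the paper's exact specification, and all item parameters are hypothetical:

```python
# Minimal sketch (assumed functional form): a 2PL model whose success probability is
# attenuated for items appearing after a change point near the end of a time-limit test,
# mimicking examinees who defer harder items and then run out of time.
import numpy as np

rng = np.random.default_rng(1)
n_person, n_item = 1000, 30
theta = rng.normal(size=n_person)                 # latent abilities
a = rng.uniform(0.8, 2.0, size=n_item)            # discriminations (hypothetical)
b = rng.normal(size=n_item)                       # difficulties (hypothetical)

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

# Assumed speeded effect: after the change point, the chance of producing a correct
# answer decays with item position (unanswered items are scored as incorrect).
change_point, decay = 24, 0.4
pos = np.arange(n_item)
position_effect = np.where(pos < change_point, 1.0, np.exp(-decay * (pos - change_point + 1)))

prob = p_2pl(theta, a, b) * position_effect[None, :]
responses = rng.binomial(1, prob)                 # simulated speeded response matrix
```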
Traditional principal component analysis (PCA) suffers from high estimation variability and low interpretability in high‐dimensional data analysis. This article presents several regularization approaches for PCA that impose structural constraints on eigenvectors to avoid overfitting and ease interpretation. Applying shrinkage, thresholding, smoothing, or rotation to eigenvectors leads to regularized PCA with enhanced interpretability.
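As a hedged illustration of one of these approaches (thresholding), the sketch below soft-thresholds the leading eigenvectors of a sample covariance matrix and renormalizes them to obtain sparse loadings; the data, number of components, and threshold value are arbitrary choices for demonstration, not the article's procedure:

```python
# Minimal sketch: regularized PCA by soft-thresholding eigenvectors to produce sparse,
# more interpretable loadings. Toy data and an arbitrary threshold are used throughout.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # toy high-dimensional data
X -= X.mean(axis=0)

eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigval)[::-1]
V = eigvec[:, order[:5]]                          # leading five principal directions

def soft_threshold(v, lam):
    u = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    n = np.linalg.norm(u)
    return u / n if n > 0 else u

V_sparse = np.column_stack([soft_threshold(V[:, j], 0.05) for j in range(V.shape[1])])
print((np.abs(V_sparse) < 1e-12).mean())          # fraction of loadings set exactly to zero
```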
The non‐response model in Knott et al. (1991, Statistician, 40, 217) can be represented as a tree model with one branch for response/non‐response and another branch for correct/incorrect response, where each branch probability is characterized by an item response theory model. The model assumes that there is only one source of non‐responses. However, in questionnaires or educational tests, non‐responses may come from different sources, such as test speededness, inability to answer, lack of motivation, and sensitive questions. To better accommodate these more realistic underlying mechanisms, we propose a tree model with four end nodes, not all distinct, for non‐response modelling. Laplace‐approximated maximum likelihood estimation for the proposed model is suggested. Simulations demonstrate the validity of the proposed estimation procedure and the advantage of the proposed model over traditional methods. For illustration, the methodologies are applied to data from the 2012 Programme for International Student Assessment (PISA). The analysis shows that the proposed tree model fits the PISA data better than other existing models, providing a useful tool for distinguishing the sources of non‐responses.
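To make the tree structure concrete, the following minimal sketch computes end-node probabilities for the two-branch tree of Knott et al., using assumed 2PL branch models and illustrative parameters; the proposed four-end-node model extends this construction with additional branches for the different sources of non-response:

```python
# Minimal sketch (assumed 2PL branch models, illustrative parameters): one branch governs
# response vs. non-response, the other governs correct vs. incorrect given a response.
# End-node probabilities are products of branch probabilities along the tree paths.
import numpy as np

def p2pl(x, a, b):
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

theta, xi = 0.3, -0.5                # ability and response-propensity latent variables
p_respond = p2pl(xi, 1.2, -0.8)      # branch 1: probability of giving any response
p_correct = p2pl(theta, 1.5, 0.1)    # branch 2: probability correct, given a response

probs = {
    "non-response": 1.0 - p_respond,
    "incorrect":    p_respond * (1.0 - p_correct),
    "correct":      p_respond * p_correct,
}
print(probs, sum(probs.values()))    # end-node probabilities sum to 1
```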