This study investigated the performance of three approaches to estimating a two-phase mixture model in which the first phase was a two-class latent class analysis model and the second phase was a linear growth model with four time points. The three evaluated methods were (a) the one-step approach, (b) the three-step approach, and (c) the case-weight approach. Several important findings emerged. First, the case-weight and three-step approaches had higher convergence rates than the one-step approach. Second, the case-weight and three-step approaches generally performed better than the one-step approach in selecting the correct model. Third, all three approaches recovered the parameters of the larger class similarly well, but parameter recovery for the smaller class differed among the three approaches. For example, the case-weight approach produced consistently lower empirical standard errors, yet both the case-weight and three-step approaches substantially underestimated the standard errors when class separation was low, and bias was substantially higher for the case-weight approach than for the other two approaches.
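A minimal simulation sketch of the kind of two-phase mixture described above: a two-class latent class structure in phase one and class-specific linear growth over four time points in phase two. The class proportions, growth means, and variances below are assumed for illustration and are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 500                                  # hypothetical sample size
time = np.arange(4)                      # four equally spaced occasions

# Phase 1: two-class latent class membership (larger vs. smaller class)
class_prob = np.array([0.7, 0.3])        # assumed mixing proportions
c = rng.choice(2, size=n, p=class_prob)

# Phase 2: linear growth with class-specific intercept and slope means
intercept_mean = np.array([50.0, 40.0])  # assumed values
slope_mean = np.array([2.0, 5.0])
intercept = rng.normal(intercept_mean[c], 3.0)
slope = rng.normal(slope_mean[c], 1.0)

# Observed repeated measures: y_it = intercept_i + slope_i * t + residual
y = intercept[:, None] + slope[:, None] * time + rng.normal(0, 2.0, (n, 4))
print(y.shape)  # (500, 4)
```

Data of this form can then be analyzed with the one-step, three-step, or case-weight estimators compared in the study.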
The purpose of this article is to demonstrate ways to model nonlinear growth using three testing occasions. We demonstrate our growth models in the context of curriculum-based measurement, using fall, winter, and spring passage reading fluency benchmark assessments. We present a brief technical overview covering the limitations of a growth model with three time points, how nonlinear growth can be modeled under this constraint, and the associated limitations. We then present results for a piecewise growth mixture modeling approach that models nonlinear growth for one to three classes, further explains individual differences, and captures heterogeneity in growth patterns. We discuss our interpretation of these results, as well as the implications of different methods for modeling nonlinear growth with three occasions.
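As a concrete illustration of the piecewise time coding discussed above, the following is a minimal sketch assuming simulated fall/winter/spring scores and a random-intercept specification fit with statsmodels; the sample size, score distributions, and variable names are hypothetical and do not come from the article's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
fall = rng.normal(100, 15, n)
winter = fall + rng.normal(20, 5, n)    # steeper fall-to-winter gain
spring = winter + rng.normal(10, 5, n)  # flatter winter-to-spring gain

long = pd.DataFrame({
    "id": np.repeat(np.arange(n), 3),
    "score": np.column_stack([fall, winter, spring]).ravel(),
    # piecewise coding: slope1 = fall-to-winter change, slope2 = winter-to-spring change
    "slope1": np.tile([0, 1, 1], n),
    "slope2": np.tile([0, 0, 1], n),
})

# Random-intercept model with two fixed piecewise slopes; with only three
# occasions, random slopes are not identified without further constraints.
model = smf.mixedlm("score ~ slope1 + slope2", long, groups=long["id"]).fit()
print(model.summary())
```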
Model specification issues in the cross-level two-way differential item functioning (DIF) model were previously investigated by Patarapichayatham et al. (2009), who showed that an incorrect model specification can easily lead to biased estimates of key parameters. The objective of this article is to provide further insight into the issue by focusing on the impact of model selection strategies. Six model selection strategies were compared. Through analyses of repeatedly simulated data, we evaluated the frequency with which each model was selected as the best model, as well as the resulting parameter estimates. The Bayesian information criterion (BIC) strategy tended to choose incomplete models more often than the other strategies and led to more biased parameter estimates.
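For concreteness, here is a minimal sketch of BIC-based selection among candidate models fit to the same data; the candidate model names, log-likelihoods, parameter counts, and sample size are invented for illustration and are not results from the study.

```python
import numpy as np

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion; smaller values are preferred."""
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# Hypothetical fitted candidate models: (name, log-likelihood, number of parameters)
candidates = [
    ("no-DIF model", -2510.4, 12),
    ("uniform DIF model", -2498.7, 15),
    ("cross-level two-way DIF model", -2490.2, 19),
]

n_obs = 1000  # assumed sample size
scores = {name: bic(ll, k, n_obs) for name, ll, k in candidates}
best = min(scores, key=scores.get)
print(scores, "-> selected:", best)
```

Because the penalty term grows with log(n), BIC favors more parsimonious models, which is one plausible reason it can prefer incomplete specifications in settings like the one studied here.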
This study examined differential item discriminations between the individual and cluster levels by focusing on the patterns and magnitudes of item discriminations under a two-parameter logistic (2PL) multilevel IRT model across a variety of simulation conditions. The consistency between the mean of the individual-level ability estimates and the cluster-level ability estimates was evaluated using the correlation between them. The two were highly correlated when the patterns of item discriminations were the same at the individual and cluster levels, and as long as the patterns matched, the magnitudes of the item discriminations had little effect on the correlations. However, the correlation became lower when the patterns of item discriminations differed between the individual and cluster levels, and in that case the mean of the estimated individual-level abilities was not necessarily a good representation of the cluster-level ability.
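A minimal sketch, under assumed parameter values and one possible parameterization, of generating item responses from a two-level 2PL structure in which the discrimination pattern can differ between the within (individual) and between (cluster) levels; all numbers below are illustrative, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_clusters, cluster_size, n_items = 100, 20, 10

a_within = rng.uniform(0.8, 2.0, n_items)   # individual-level discriminations
a_between = a_within[::-1]                  # a deliberately different cluster-level pattern
# (setting a_between = a_within would represent matching patterns at both levels)
b = rng.normal(0, 1, n_items)               # item difficulties

theta_b = rng.normal(0, 1, n_clusters)                   # cluster-level ability
theta_w = rng.normal(0, 1, (n_clusters, cluster_size))   # individual-level deviations

# Response probability combines both levels with level-specific discriminations
logit = (a_within * theta_w[..., None]
         + a_between * theta_b[:, None, None]
         - a_within * b)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
print(y.shape)  # (clusters, persons per cluster, items)
```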