2019
DOI: 10.3389/fpsyg.2019.02494
Sample Size Requirements for Applying Mixed Polytomous Item Response Models: Results of a Monte Carlo Simulation Study

Abstract: Mixture models of item response theory (IRT) can be used to detect inappropriate category use. Data collected by panel surveys where attitudes and traits are typically assessed by short scales with many response categories are prone to response styles indicating inappropriate category use. However, the application of mixed IRT models to this data type can be challenging because of many threshold parameters within items. Up to now, there is very limited knowledge about the sample size required for an appropriat…
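As context for the abstract, here is a minimal sketch of the within-class category probabilities in a mixed generalized partial credit model, the family of mixed polytomous IRT models discussed in the paper; the notation (class-specific discriminations \alpha_{jg} and thresholds \beta_{jkg}) is generic and not taken from the paper itself:

P(X_{ij} = x \mid \theta_i, g) = \frac{\exp\!\big(\sum_{k=1}^{x} \alpha_{jg}(\theta_i - \beta_{jkg})\big)}{\sum_{r=0}^{K_j} \exp\!\big(\sum_{k=1}^{r} \alpha_{jg}(\theta_i - \beta_{jkg})\big)}, \qquad x = 0, \ldots, K_j,

where the empty sum for x = 0 is taken as zero. Each latent class g carries its own mixing weight \pi_g and its own item parameters, so every additional response category adds one more threshold per item and per class, which is why short scales with many response categories quickly lead to many parameters.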

Cited by 16 publications (14 citation statements)
References 61 publications
“…Forty-three individuals did not provide any valid values on the items of job satisfaction and were excluded from the analysis sample. According to current simulation findings (e.g., Cho 2013; Huang 2016; Jin and Wang 2014; Kutscher et al. 2019), this sample size (namely 2000-2500 individuals per condition) should be sufficient for the application of the mixed polytomous IRT model to display optimal performance. Women comprised more than half of the entire sample (61%).…”
Section: Sample and Procedures
Mentioning confidence: 99%
“…Within each experimental condition, we estimated the multidimensional rmGPCM including up to five latent classes and determined the best-fitting solution using the Bayesian information criterion (BIC; Schwarz 1978), which works well and is consistent in the context of complex models and large sample sizes (Dziak et al. 2012). We purposely chose neither the Akaike information criterion with a per-parameter penalty of three (AIC3; Bozdogan 1994) nor the sample-size adjusted BIC (SABIC; Sclove 1987), both of which showed good performance for model selection in unidimensional polytomous IRT models (Kutscher et al. 2019). However, there is a lack of evidence concerning their performance for multidimensional IRT models.…”
Section: Statistical Analyses
Mentioning confidence: 99%
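The BIC, AIC3, and SABIC mentioned in this quote are simple functions of a fitted model's log-likelihood, number of free parameters, and sample size. The sketch below (not the authors' code; the candidate fit values are invented for illustration) shows how these criteria are computed and how the class solution with the smallest BIC would be selected:

```python
from math import log

def information_criteria(log_lik: float, n_params: int, n_obs: int) -> dict:
    """Model-selection criteria commonly used with mixture IRT models.

    BIC   = -2*logL + k*ln(N)
    AIC3  = -2*logL + 3*k               (AIC with a penalty of 3 per parameter)
    SABIC = -2*logL + k*ln((N + 2)/24)  (sample-size adjusted BIC, Sclove 1987)
    """
    deviance = -2.0 * log_lik
    return {
        "BIC": deviance + n_params * log(n_obs),
        "AIC3": deviance + 3.0 * n_params,
        "SABIC": deviance + n_params * log((n_obs + 2.0) / 24.0),
    }

# Hypothetical fit results for solutions with 1-5 latent classes:
# number of classes -> (log-likelihood, number of estimated parameters).
candidates = {
    1: (-15432.1, 40),
    2: (-15120.8, 82),
    3: (-15055.3, 124),
    4: (-15031.9, 166),
    5: (-15020.4, 208),
}

n_obs = 2500  # assumed sample size
for n_classes, (log_lik, n_params) in candidates.items():
    ic = information_criteria(log_lik, n_params, n_obs)
    print(n_classes, {name: round(value, 1) for name, value in ic.items()})

# Select the number of classes with the smallest BIC, as described in the quote above.
best = min(candidates, key=lambda c: information_criteria(*candidates[c], n_obs)["BIC"])
print("Best number of classes by BIC:", best)
```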
“…The sample resulted from pooling data from 14 cross-sectional studies in Brazil that utilized the DLQI-BRA to assess HRQOL [25-33]. The sample size (n = 1286) was assumed to be sufficient for IRT parameter estimation, dimensionality assessment, and DIF and generalized linear model analyses adjusted for up to eight dummy variables [50-52]. All data were from completely filled questionnaires; there was no available information regarding the number of distributed questionnaires or the percentage of incomplete questionnaires in each original study.…”
Section: Methods
Mentioning confidence: 99%
“…The analysis output revealed that the item parameters estimated for Class 1 were labeled as Class 2. This problem was solved by taking the estimated item parameter values as starting values (Kutscher et al., 2019). For the South Africa data, item difficulty parameters in Class 1 range from -3.017 (item 3) to 2.847 (item 1), while those in Class 2 range from -0.639 (item 15) to 5.987 (item 1).…”
Section: Results
Mentioning confidence: 99%
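The label switching described here means that the classes themselves are recovered correctly but their labels come out exchanged, so that "Class 1" parameters are reported under "Class 2" and vice versa. Besides the starting-values fix cited from Kutscher et al. (2019), a common diagnostic is to relabel classes so that their item parameters best match a reference solution; the sketch below (our own illustration with made-up item difficulties, not code from any of the cited studies) shows that idea:

```python
from itertools import permutations

def relabel_classes(reference: dict, estimated: dict) -> dict:
    """Relabel latent classes in `estimated` so that their item parameter vectors
    best match `reference` (smallest summed absolute difference).  Brute force over
    class permutations is fine for the small class counts used in mixture IRT.
    """
    labels = list(reference)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(labels):
        cost = sum(
            abs(reference[ref][i] - estimated[est][i])
            for ref, est in zip(labels, perm)
            for i in range(len(reference[ref]))
        )
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return {ref: estimated[est] for ref, est in zip(labels, best_perm)}

# Made-up item difficulty vectors from two runs whose class labels came out switched.
run_a = {"Class 1": [-3.0, -1.2, 0.4], "Class 2": [-0.6, 2.1, 5.9]}
run_b = {"Class 1": [-0.7, 2.0, 6.0], "Class 2": [-3.1, -1.1, 0.5]}

print(relabel_classes(run_a, run_b))
# {'Class 1': [-3.1, -1.1, 0.5], 'Class 2': [-0.7, 2.0, 6.0]}
```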
“…In this case, the label switching problem can be solved by taking the estimated item parameter values as starting values. Model-data fit index values are not affected by label switching (Kutscher, Eid, & Crayen, 2019). Table 2 shows the information criterion indices obtained from the analyses aimed at determining which mixture IRT model (Rasch, 1PL, 2PL, or 3PL) best fits the eighth-grade TIMSS 2015 science subtest data for Singapore, Turkey, and South Africa. When the AIC and BIC values are examined, both are lowest for the two-class mixture Rasch model for the Singapore data.…”
Section: Label Switching
Mentioning confidence: 97%