Several approaches have been proposed for latent class modeling with external variables, including one-step, two-step, and three-step estimators. However, little is yet known about the performance of these approaches when direct effects of the external variable on the indicators of latent class membership are present. In the current article, we compare these approaches and investigate the consequences of not modeling such direct effects when they are present, as well as the power of residual and fit statistics to identify such effects. The simulation results show that not modeling direct effects can lead to severe parameter bias, especially with a weak measurement model. Both residual and fit statistics can be used to identify such effects, as long as the number and strength of these effects are limited and the measurement model is sufficiently strong.
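To make the mechanism concrete, the following minimal Python sketch (illustrative only, not the simulation code used in the article; the sample size, parameter values, and the single affected indicator are hypothetical assumptions) simulates a two-class latent class model in which an external covariate z both predicts class membership and has a direct effect on one binary indicator, then fits the model by maximum likelihood with and without that direct effect:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

rng = np.random.default_rng(1)
n, J = 2000, 4  # persons, binary indicators

# Simulate: covariate z predicts class membership, and z also has a
# direct effect (gamma_true) on indicator 1, bypassing the classes.
z = rng.normal(size=n)
cls = rng.binomial(1, expit(-0.2 + 1.0 * z))   # latent class membership
tau = np.array([[-1.5, 1.5]] * J)              # item intercepts per class
gamma_true = 0.8
logit_p = tau[np.arange(J)[None, :], cls[:, None]].astype(float)
logit_p[:, 0] += gamma_true * z                # the direct effect
y = rng.binomial(1, expit(logit_p))

def neg_loglik(theta, direct_effect):
    a, b = theta[0], theta[1]                  # class-membership regression
    tau_hat = theta[2:2 + 2 * J].reshape(J, 2)
    gamma = theta[-1] if direct_effect else 0.0
    pi1 = expit(a + b * z)                     # P(class 1 | z)
    lik = 0.0
    for c, w in ((0, 1 - pi1), (1, pi1)):
        lp = np.tile(tau_hat[:, c], (n, 1))
        lp[:, 0] += gamma * z                  # zero when not modeled
        p = expit(lp)
        lik = lik + w * np.prod(np.where(y == 1, p, 1 - p), axis=1)
    return -np.sum(np.log(lik + 1e-300))

# Asymmetric start values break the label symmetry between the classes.
start = np.concatenate([[0.0, 0.0], np.tile([-0.5, 0.5], J)])
ignore = minimize(neg_loglik, start, args=(False,), method="BFGS")
model = minimize(neg_loglik, np.append(start, 0.0), args=(True,), method="BFGS")

print("covariate effect on membership, direct effect ignored:", ignore.x[1].round(2))
print("covariate effect on membership, direct effect modeled:", model.x[1].round(2))
print("estimated direct effect:", model.x[-1].round(2))
```

With the direct effect ignored, the extra association between z and indicator 1 must be absorbed elsewhere in the model, which typically distorts the estimated covariate effect on class membership; a likelihood-ratio comparison of the two fits is one way such an unmodeled effect can be flagged.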
The low-stakes character of international large-scale educational assessments implies that a participating student might at times provide answers unrelated to item content, as if not even reading the items and instead choosing response options at random throughout. Depending on the severity of this invalid response behavior, interpretations of the assessment results are at risk of being invalidated. Little is known about either the prevalence or the impact of such random responders in the context of international large-scale educational assessments. Following a mixture item response theory (IRT) approach, an initial investigation of both issues is conducted for the Confidence in and Value of Mathematics/Science (VoM/VoS) scales in the Trends in International Mathematics and Science Study (TIMSS) 2015 student questionnaire. We end with a call to facilitate further mapping of invalid response behavior in this context through the inclusion of instructed response items and survey completion-speed indicators in the assessments, and through a habit of sensitivity checks in all secondary data studies.
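The mixture IRT idea can be sketched as a two-component model per scale: a random-responder class that picks Likert categories uniformly at random, and an attentive class that follows an ordinal IRT model. The Python snippet below is a didactic approximation under stated assumptions (a graded response model with known item parameters, hypothetical parameter values, and only the mixing proportion estimated by EM), not the authors' estimation pipeline:

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(7)
n, m, K = 3000, 8, 4          # respondents, items, Likert categories (0..K-1)
pi_true = 0.10                # true share of random responders

# Hypothetical graded-response-model item parameters.
a = rng.uniform(1.0, 2.0, m)                       # discriminations
b = np.sort(rng.normal(0, 1, (m, K - 1)), axis=1)  # ordered thresholds

def grm_probs(theta):
    """Category probabilities P(y = k | theta), shape (len(theta), m, K)."""
    cum = expit(a[None, :, None] * (theta[:, None, None] - b[None, :, :]))
    cum = np.concatenate([np.ones((len(theta), m, 1)), cum,
                          np.zeros((len(theta), m, 1))], axis=2)
    return cum[:, :, :-1] - cum[:, :, 1:]

# Simulate: attentive responders follow the GRM, random responders pick uniformly.
theta = rng.normal(size=n)
is_random = rng.random(n) < pi_true
p = grm_probs(theta)
y = np.array([[rng.choice(K, p=p[i, j]) for j in range(m)] for i in range(n)])
y[is_random] = rng.integers(0, K, (is_random.sum(), m))

# Marginal likelihood of each response vector under the GRM
# (numerical quadrature over the N(0, 1) ability distribution).
nodes = np.linspace(-4, 4, 61)
w = norm.pdf(nodes); w /= w.sum()
pq = grm_probs(nodes)                       # (61, m, K)
lik_items = pq[:, np.arange(m), y]          # (61, n, m): P(y_ij | theta_q)
lik_N = w @ np.prod(lik_items, axis=2)      # attentive-class likelihood
lik_R = (1.0 / K) ** m                      # uniform-responding likelihood

# EM over the mixing proportion pi (item parameters held fixed here).
pi = 0.5
for _ in range(200):
    post = pi * lik_R / (pi * lik_R + (1 - pi) * lik_N)
    pi = post.mean()

print(f"estimated share of random responders: {pi:.3f} (true {pi_true})")
print("flagged as random (posterior > .5):", int((post > 0.5).sum()))
```

Respondents whose posterior probability of uniform responding exceeds a cutoff (here .5) would be flagged as random responders; in a full analysis the item parameters would be estimated jointly with the mixture rather than held fixed.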
Questionnaires in educational research assessing students' attitudes and beliefs are low-stakes for the students. As a consequence, students might not always respond consistently to a questionnaire scale, but may instead produce random response patterns with no clear link to item content. We study inter-individual differences in students' intra-individual random responding profiles across 19 questionnaire scales in the TIMSS 2015 eighth-grade student questionnaire in seven countries. A mixture IRT approach was used to assess students' random responder status on each questionnaire scale. A follow-up latent class analysis across the questionnaire revealed four random responding profiles that generalized across countries: a majority of consistent non-random responders, intermittent moderate random responders, frequent random responders, and students who were triggered to respond randomly exclusively on the confidence scales in the questionnaire. We discuss implications of our findings in light of general data-quality concerns and the potential ineffectiveness of early-warning monitoring systems in computer-based surveys.
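The follow-up step can likewise be sketched: treating each student's per-scale random-responder flags as binary indicators, a latent class analysis fit by EM recovers responding profiles. The sketch below (hypothetical profile sizes and flag probabilities; class labels are arbitrary up to permutation) mimics the four-profile structure described above, including a profile that responds randomly almost exclusively on a few scales:

```python
import numpy as np

rng = np.random.default_rng(3)
n, S, C = 5000, 19, 4          # students, scales, latent profiles

# Simulate flags from four hypothetical profiles.
true_pi = np.array([0.70, 0.15, 0.10, 0.05])
true_p = np.vstack([np.full(S, 0.02),          # consistent non-random
                    np.full(S, 0.20),          # intermittent moderate
                    np.full(S, 0.60),          # frequent
                    np.where(np.arange(S) < 4, 0.70, 0.05)])  # scale-specific
cls = rng.choice(C, n, p=true_pi)
x = rng.binomial(1, true_p[cls])               # (n, S) random-responder flags

# EM for a C-class LCA with binary indicators.
pi = np.full(C, 1 / C)
p = rng.uniform(0.1, 0.9, (C, S))
for _ in range(500):
    log_lik = (x @ np.log(p).T + (1 - x) @ np.log(1 - p).T) + np.log(pi)
    post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)               # E-step
    pi = post.mean(axis=0)                                # M-step: class sizes
    p = (post.T @ x) / post.sum(axis=0)[:, None]          # M-step: flag probs
    p = p.clip(1e-6, 1 - 1e-6)

print("estimated profile sizes (sorted):", np.sort(pi)[::-1].round(2))
print("flag probabilities per scale, largest profile:", p[np.argmax(pi)].round(2))
```

In an actual analysis, the number of profiles would not be fixed in advance but chosen by comparing information criteria such as BIC across solutions.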