Structural equation modeling (SEM) is a widespread approach for testing substantive hypotheses in psychology and other social sciences. However, most studies involving structural equation models neither report a statistical power analysis as a criterion for sample size planning nor evaluate the achieved power of the performed tests. In this tutorial, we provide a step-by-step illustration of how a priori, post hoc, and compromise power analyses can be conducted for a range of different SEM applications. Using illustrative examples and the R package semPower, we demonstrate power analyses for hypotheses regarding overall model fit, global model comparisons, particular individual model parameters, and differences in multigroup contexts (such as in tests of measurement invariance). We encourage researchers to pursue reliable, and thus more replicable, results through thoughtful sample size planning, especially if small or medium-sized effects are expected.
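As a minimal sketch of the three types of power analysis in semPower (not taken from the tutorial itself), the calls below request an a priori, a post hoc, and a compromise analysis for a test of overall model fit; the effect size (RMSEA = .05), df = 100, N = 250, and the error-ratio weighting are placeholder values chosen purely for illustration.

library(semPower)

# A priori: sample size needed to detect a misfit of RMSEA = .05
# with power = .80 at alpha = .05 in a model with df = 100
ap <- semPower.aPriori(effect = .05, effect.measure = "RMSEA",
                       alpha = .05, power = .80, df = 100)
summary(ap)

# Post hoc: power achieved with a given sample size (here N = 250)
ph <- semPower.postHoc(effect = .05, effect.measure = "RMSEA",
                       alpha = .05, N = 250, df = 100)
summary(ph)

# Compromise: balance alpha and beta errors for a fixed N,
# here weighting both error types equally (abratio = 1)
cp <- semPower.compromise(effect = .05, effect.measure = "RMSEA",
                          abratio = 1, N = 250, df = 100)
summary(cp)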
Comparability of measurement across different cultural groups is an essential prerequisite for any cross-cultural assessment. However, cross-cultural measurement invariance is rarely achieved, and detecting the source of noninvariance is often challenging. In particular, when different language versions of a measure are administered to different cultural groups, noninvariance on certain items may originate either from translation inconsistencies (translation bias) or from actual differences between cultural groups (culture bias). If, on the other hand, a measure is administered in a common language version (e.g., English), item noninvariance may also result from comprehension issues among nonnative speakers (comprehension bias). Here, we outline a procedure suitable for dissociating these sources of item noninvariance, termed the culture, comprehension, and translation bias (CCT) procedure. The CCT procedure is based on a between-subjects design comparing samples from two different cultures that complete a measure in either the same or a different language version. We demonstrate in a simulation study, and illustrate in an empirical example with actual cross-cultural data, how performing multiple pairwise comparisons across (a) groups differing in culture but not in language, (b) groups differing in language but not in culture, and (c) groups differing in both culture and language makes it possible to pinpoint the source of item noninvariance with high specificity. The CCT procedure thus provides a valuable tool for improving cross-cultural assessment by directing the process of item translation and cultural adaptation.
Public Significance Statement
We outline a procedure that allows disentangling potential sources of measurement noninvariance in cross-cultural assessment. By performing multiple pairwise comparisons across groups differing in culture, language, or a combination thereof, one can assess the effects on measurement of cultural differences, translation inconsistencies, or comprehension issues arising from completing a measure in one's nonnative language.
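To make one of these pairwise comparisons concrete, the following sketch shows how a single invariance comparison could be run in R with lavaan; the one-factor model, the variable names x1-x4, the data frame dat, and the grouping variable "group" are illustrative assumptions, not the authors' materials or the CCT procedure's actual implementation.

library(lavaan)

# Hypothetical one-factor measurement model
model <- 'f =~ x1 + x2 + x3 + x4'

# Configural, metric, and scalar models for one pair of groups,
# e.g., same culture but different language versions (translation bias)
fit.config <- cfa(model, data = dat, group = "group")
fit.metric <- cfa(model, data = dat, group = "group",
                  group.equal = "loadings")
fit.scalar <- cfa(model, data = dat, group = "group",
                  group.equal = c("loadings", "intercepts"))

# Likelihood-ratio tests between nested invariance levels; repeating
# this for the culture-only, language-only, and culture-plus-language
# pairs is what localizes the source of item noninvariance
anova(fit.config, fit.metric, fit.scalar)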
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor-analytic structure of the data, in which each indicator is defined as the sum of latent variables and errors, so it is unclear whether previous results hold when the source of non-normality is taken into account. We conducted a Monte Carlo simulation manipulating the underlying multivariate distribution to assess the effect of the source of non-normality (latent, error, and marginal conditions with either multivariate normal or non-normal marginal distributions) on different measures of fit (empirical rejection rates for the likelihood-ratio model test statistic, the root mean square error of approximation, the standardized root mean square residual, and the comparative fit index). We considered different estimation methods (maximum likelihood, generalized least squares, and (un)modified asymptotically distribution-free), sample sizes, and extents of non-normality in correctly specified and misspecified models to investigate their performance. The results show that all measures of fit were affected by the source of non-normality, albeit with varying patterns across the analyzed estimation methods.
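As a rough sketch of a single simulation cell, the R code below uses lavaan's simulateData() to induce non-normality in the marginal (indicator) distributions and then fits the model under several estimators; the population loadings, sample size, and skewness/kurtosis values are placeholder choices, and the latent- and error-level conditions of the study would require generating those components directly rather than via simulateData().

library(lavaan)

# Hypothetical population model with loadings of .7
pop.model <- 'f =~ .7*x1 + .7*x2 + .7*x3 + .7*x4'

# Marginal condition: skewed and kurtotic indicator distributions
dat <- simulateData(pop.model, sample.nobs = 500,
                    skewness = 2, kurtosis = 7, seed = 1)

# Analysis model (correctly specified here)
ana.model <- 'f =~ x1 + x2 + x3 + x4'

# Fit under maximum likelihood, generalized least squares, and the
# asymptotically distribution-free estimator (WLS in lavaan), then
# extract the fit measures examined in the study
for (est in c("ML", "GLS", "WLS")) {
  fit <- cfa(ana.model, data = dat, estimator = est)
  print(fitMeasures(fit, c("chisq", "pvalue", "rmsea",
                           "srmr", "cfi")))
}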