Exploratory factor analysis (EFA) is used routinely in the development and validation of assessment instruments. One of the most significant challenges in performing EFA is determining how many factors to retain. Parallel analysis (PA) is an effective stopping rule that compares the eigenvalues of randomly generated data with those of the actual data. PA takes into account sampling error, and at present it is widely considered the best available method. We introduce a variant of PA that goes even further by reproducing the observed correlation matrix rather than generating random data. Comparison data (CD) with known factorial structure are first generated using 1 factor, and then the number of factors is increased until the reproduction of the observed eigenvalues fails to improve significantly. We evaluated the performance of PA, CD with known factorial structure, and 7 other techniques in a simulation study spanning a wide range of challenging data conditions. In terms of accuracy and robustness across data conditions, the CD technique outperformed all other methods, including a nontrivial advantage over PA. We provide program code to implement the CD technique, which requires no more specialized knowledge or skills than performing PA.
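To make the logic of PA and the CD extension concrete, the sketch below implements parallel analysis and a simplified comparison-data stopping rule in Python (the article itself supplies its own program code). The function names, the crude k-factor data-generation step, the RMSR-based fit measure, and the Mann-Whitney stopping criterion with its alpha level are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def eigenvalues(R):
    """Eigenvalues of a correlation matrix, sorted in descending order."""
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def parallel_analysis(X, n_sims=100, percentile=95, seed=0):
    """Retain factors whose observed eigenvalue exceeds the chosen percentile
    of eigenvalues from random, uncorrelated data of the same dimensions."""
    rng = np.random.default_rng(seed)
    n_cases, n_vars = X.shape
    obs = eigenvalues(np.corrcoef(X, rowvar=False))
    sims = np.array([
        eigenvalues(np.corrcoef(rng.standard_normal((n_cases, n_vars)), rowvar=False))
        for _ in range(n_sims)
    ])
    threshold = np.percentile(sims, percentile, axis=0)
    retained = 0
    for observed, random_val in zip(obs, threshold):
        if observed <= random_val:
            break
        retained += 1
    return retained

def comparison_eigenvalues(R_obs, n_cases, n_factors, rng):
    """Eigenvalues of comparison data sampled from a k-factor approximation of
    the observed correlation matrix (a crude stand-in for the iterative
    data-generation step used by the published CD technique)."""
    vals, vecs = np.linalg.eigh(R_obs)
    order = np.argsort(vals)[::-1]
    vals, vecs = np.clip(vals[order], 0, None), vecs[:, order]
    loadings = vecs[:, :n_factors] * np.sqrt(vals[:n_factors])
    R_model = loadings @ loadings.T
    np.fill_diagonal(R_model, 1.0)  # restore unit variances; uniqueness absorbs the rest
    L = np.linalg.cholesky(R_model + 1e-8 * np.eye(R_model.shape[0]))
    Z = rng.standard_normal((n_cases, R_model.shape[0])) @ L.T
    return eigenvalues(np.corrcoef(Z, rowvar=False))

def cd_num_factors(X, max_factors=8, n_reps=100, alpha=0.30, seed=0):
    """Add factors until reproducing the observed eigenvalues no longer improves
    significantly, judged here with a Mann-Whitney U test on the RMSR across
    replications (the liberal alpha is an illustrative choice)."""
    rng = np.random.default_rng(seed)
    n_cases, _ = X.shape
    R_obs = np.corrcoef(X, rowvar=False)
    obs = eigenvalues(R_obs)
    rmsr_prev = None
    for k in range(1, max_factors + 1):
        rmsr_k = np.array([
            np.sqrt(np.mean((comparison_eigenvalues(R_obs, n_cases, k, rng) - obs) ** 2))
            for _ in range(n_reps)
        ])
        if rmsr_prev is not None:
            # Stop when the extra factor does not significantly reduce the misfit.
            p = mannwhitneyu(rmsr_prev, rmsr_k, alternative="greater").pvalue
            if p > alpha:
                return k - 1
        rmsr_prev = rmsr_k
    return max_factors
```

The key point the sketch conveys is the decision rule described in the abstract: each additional factor must significantly improve the reproduction of the observed eigenvalues, and the previous solution is retained as soon as it does not.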
A number of recent studies have used Meehl's (1995) taxometric method to determine empirically whether one should model assessment-related constructs as categories or dimensions. The taxometric method includes multiple data-analytic procedures designed to check the consistency of results. The goal is to differentiate between strong evidence of categorical structure, strong evidence of dimensional structure, and ambiguous evidence that suggests withholding judgment. Many taxometric consistency tests have been proposed, but their use has not been operationalized and studied rigorously. What tests should be performed, how should results be combined, and what thresholds should be applied? We present an approach to consistency testing that builds on prior work demonstrating that parallel analyses of categorical and dimensional comparison data provide an accurate index of the relative fit of competing structural models. Using a large simulation study spanning a wide range of data conditions, we examine many critical elements of this approach. The results provide empirical support for the first rigorous operationalization of consistency testing. We discuss and empirically illustrate guidelines for implementing this approach and suggest avenues for future research to extend the practice of consistency testing to other techniques for modeling latent variables in the realm of psychological assessment.
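For readers unfamiliar with how categorical and dimensional comparison data index relative fit, the sketch below shows one plausible way such an index (in the spirit of a comparison curve fit index, CCFI) might be computed and combined across taxometric procedures. The input curves, the RMSR-ratio formula, and the ambiguous band are assumptions stated for illustration only; they are not the tests, combination rules, or thresholds evaluated in the study.

```python
import numpy as np

def ccfi(observed_curve, categorical_curve, dimensional_curve):
    """Relative fit of categorical vs. dimensional comparison data for one
    procedure's averaged curve: values near 1 favor categorical structure,
    values near 0 favor dimensional structure, values near .5 are ambiguous."""
    rmsr_cat = np.sqrt(np.mean((observed_curve - categorical_curve) ** 2))
    rmsr_dim = np.sqrt(np.mean((observed_curve - dimensional_curve) ** 2))
    return rmsr_dim / (rmsr_dim + rmsr_cat)

def consistency_judgment(ccfis, ambiguous_band=(0.45, 0.55)):
    """Combine fit indices from multiple procedures (e.g., MAMBAC, MAXEIG,
    L-Mode curves supplied by the caller): average them and withhold judgment
    when the mean falls inside an ambiguous band. The band width here is an
    illustrative assumption, not an empirically validated threshold."""
    mean_ccfi = float(np.mean(ccfis))
    lower, upper = ambiguous_band
    if mean_ccfi > upper:
        return mean_ccfi, "categorical"
    if mean_ccfi < lower:
        return mean_ccfi, "dimensional"
    return mean_ccfi, "ambiguous"
```

The sketch assumes the curves from the research data and from the categorical and dimensional comparison data have already been generated by the taxometric procedures; its only purpose is to illustrate how relative-fit values could be aggregated into a three-way judgment of the kind the abstract describes.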