Despite the widespread use of exploratory factor analysis in psychological research, researchers often make questionable decisions when conducting these analyses. This article reviews the major design and analytical decisions that must be made when conducting a factor analysis and notes that each of these decisions has important consequences for the obtained results. Recommendations that have been made in the methodological literature are discussed. Analyses of 3 existing empirical data sets are used to illustrate how questionable decisions in conducting factor analyses can yield problematic results. The article presents a survey of 2 prominent journals that suggests that researchers routinely conduct analyses using such questionable methods. The implications of these practices for psychological research are discussed, and the reasons for current practices are reviewed.
A framework for hypothesis testing and power analysis in the assessment of fit of covariance structure models is presented. We emphasize the value of confidence intervals for fit indices, and we stress the relationship of confidence intervals to a framework for hypothesis testing. The approach allows for testing null hypotheses of not-good fit, reversing the role of the null hypothesis in conventional tests of model fit, so that a significant result provides strong support for good fit. The approach also allows for direct estimation of power, where effect size is defined in terms of a null and alternative value of the root-mean-square error of approximation fit index proposed by J. H. Steiger and J. M. Lind (1980). It is also feasible to determine minimum sample size required to achieve a given level of power for any test of fit in this framework. Computer programs and examples are provided for power analyses and calculation of minimum sample sizes.
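The power computation described in this abstract rests on the noncentral chi-square distribution of the fit statistic, with noncentrality determined by sample size, degrees of freedom, and the RMSEA. The sketch below approximates that power calculation by Monte Carlo simulation rather than analytic distribution functions; the function name, defaults, and simulation approach are illustrative assumptions, not the article's own program:

```python
import random

def rmsea_power(n, df, rmsea0=0.05, rmsea1=0.08, alpha=0.05,
                reps=5000, seed=1):
    """Monte Carlo power for a test of close fit (H0: RMSEA <= rmsea0)
    against an alternative RMSEA = rmsea1, in the spirit of the
    noncentral chi-square framework; a sketch, not the exact procedure."""
    rng = random.Random(seed)

    def ncx2_draw(nc):
        # Noncentral chi-square draw: one normal shifted by sqrt(nc),
        # plus df - 1 central chi-square(1) terms.
        total = (rng.gauss(0.0, 1.0) + nc ** 0.5) ** 2
        for _ in range(df - 1):
            total += rng.gauss(0.0, 1.0) ** 2
        return total

    nc0 = (n - 1) * df * rmsea0 ** 2   # noncentrality at the null boundary
    nc1 = (n - 1) * df * rmsea1 ** 2   # noncentrality under the alternative
    null_draws = sorted(ncx2_draw(nc0) for _ in range(reps))
    crit = null_draws[int((1 - alpha) * reps)]      # empirical critical value
    alt_draws = (ncx2_draw(nc1) for _ in range(reps))
    return sum(d > crit for d in alt_draws) / reps  # rejection rate = power
```

As the abstract indicates, power grows with sample size for a fixed pair of null and alternative RMSEA values, so the same function can be evaluated over a grid of `n` to find the minimum sample size achieving a target power.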
The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. In fact, necessary sample size is dependent on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. The authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in factor analysis.

In the factor analysis literature, much attention has been given to the issue of sample size. It is widely understood that the use of larger samples in applications of factor analysis tends to provide results such that sample factor loadings are more precise estimates of population loadings and are also more stable, or less variable, across repeated sampling. Despite general agreement on this matter, there is considerable divergence of opinion and evidence about the question of how large a sample is necessary to adequately achieve these objectives. Recommendations and findings about this issue are diverse and often contradictory. The objectives of this article are to provide a
The authors examine the practice of dichotomization of quantitative measures, wherein relationships among variables are examined after 1 or more variables have been converted to dichotomous variables by splitting the sample at some point on the scale(s) of measurement. A common form of dichotomization is the median split, where the independent variable is split at the median to form high and low groups, which are then compared with respect to their means on the dependent variable. The consequences of dichotomization for measurement and statistical analyses are illustrated and discussed. The use of dichotomization in practice is described, and justifications that are offered for such usage are examined. The authors present the case that dichotomization is rarely defensible and often will yield misleading results.

We consider here some simple statistical procedures for studying relationships of one or more independent variables to one dependent variable, where all variables are quantitative in nature and are measured on meaningful numerical scales. Such measures are often referred to as individual-differences measures, meaning that observed values of such measures are interpretable as reflecting individual differences on the attribute of interest. It is of course straightforward to analyze such data using correlational methods. In the case of a single independent variable, one can use simple linear regression and/or obtain a simple correlation coefficient. In the case of multiple independent variables, one can use multiple regression, possibly including interaction terms. Such methods are routinely used in practice.

However, another approach to analysis of such data is also rather widely used. Considering the case of one independent variable, many investigators begin by converting that variable into a dichotomous variable by splitting the scale at some point and designating individuals above and below that point as defining two separate groups.
One common approach is to split the scale at the sample median, thereby defining high and low groups on the variable in question; this approach is referred to as a median split. Alternatively, the scale may be split at some other point based on the data (e.g., 1 standard deviation above the mean) or at a fixed point on the scale designated a priori. Researchers may dichotomize independent variables for many reasons: for example, because they believe there exist distinct groups of individuals or because they believe analyses or presentation of results will be simplified. After such dichotomization, the independent variable is treated as a categorical variable and statistical tests then are carried out to determine whether there is a significant difference in the mean of the dependent variable for the two groups represented by the dichotomized independent variable. When there are two independent variables, researchers often dichotomize both and then analyze effects on the dependent variable using analysis of variance (ANOVA).

There is a considerable methodological literature exam...
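The loss of information caused by a median split can be illustrated with a small simulation. The effect size and variable names below are illustrative assumptions; the attenuation factor of roughly sqrt(2/pi) ≈ .80 for a normally distributed predictor is a well-known result (Cohen, 1983), not a figure from this article:

```python
import random

def corr(a, b):
    # Pearson correlation, computed directly from definitions.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

rng = random.Random(0)
n = 5000
x = [rng.gauss(0, 1) for _ in range(n)]       # continuous predictor
y = [0.5 * xi + rng.gauss(0, 1) for xi in x]  # outcome; true r ≈ .45

med = sorted(x)[n // 2]                           # sample median
x_split = [1.0 if xi > med else 0.0 for xi in x]  # median-split version

r_full = corr(x, y)        # correlation with x kept continuous
r_split = corr(x_split, y)  # point-biserial correlation after the split
# For a normal predictor, r_split should be near sqrt(2/pi) * r_full,
# i.e., the median split discards roughly 20% of the correlation.
```

The attenuated correlation translates directly into reduced statistical power, which is one of the costs of dichotomization that the article examines.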
This chapter presents a review of applications of structural equation modeling (SEM) published in psychological research journals in recent years. We focus first on the variety of research designs and substantive issues to which SEM can be applied productively. We then discuss a number of methodological problems and issues of concern that characterize some of this literature. Although it is clear that SEM is a powerful tool that is being used to great benefit in psychological research, it is also clear that the applied SEM literature is characterized by some chronic problems and that this literature can be considerably improved by greater attention to these issues.