The purpose of this study was to investigate the relationship between sample size and the quality of factor solutions obtained from exploratory factor analysis. This research expanded the range of conditions previously examined, employing a broad selection of criteria to evaluate the quality of sample factor solutions. Results showed that when communalities are high, sample size tended to have less influence on the quality of factor solutions than when communalities are low. Overdetermination of factors was also shown to improve the factor analysis solution. Finally, decisions about the quality of the factor solution depended upon which criteria were examined.
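A minimal sketch of the kind of Monte Carlo comparison this abstract describes, assuming a simple one-factor population model and scikit-learn's FactorAnalysis; the loading values, sample sizes, and congruence measure below are illustrative, not the authors' design. Under these assumptions, higher communalities yield sample loadings closer to the population pattern at a given sample size.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

def recovery(loading, n, p=8, reps=50):
    """Mean congruence between true and sample loadings for a one-factor model."""
    true = np.full(p, loading)              # population loadings
    uniq = np.sqrt(1 - loading ** 2)        # unique-factor standard deviations
    scores = []
    for _ in range(reps):
        f = rng.normal(size=(n, 1))
        X = f @ true[None, :] + rng.normal(scale=uniq, size=(n, p))
        est = FactorAnalysis(n_components=1).fit(X).components_.ravel()
        # Tucker's congruence coefficient (abs() handles the arbitrary sign flip)
        phi = abs(true @ est) / (np.linalg.norm(true) * np.linalg.norm(est))
        scores.append(phi)
    return np.mean(scores)

for comm in (0.7, 0.3):                     # high vs. low communalities
    for n in (60, 400):                     # small vs. large samples
        print(f"communality={comm:.1f}, N={n}: "
              f"congruence={recovery(np.sqrt(comm), n):.3f}")
```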
This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including an examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods for calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and the level of heterogeneity of effect sizes. In most meta-analyses, many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study-level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or a difference between two correlations. The median Birge's (1932) ratio was larger than the convention for medium heterogeneity proposed by Hedges and Pigott (2001), indicating that the typical meta-analysis shows variability in underlying effects well beyond that expected from sampling error alone. Fixed-effects models were used more frequently than random-effects models; however, random-effects models were used with increasing frequency over time. Results of this study related to model selection are carefully compared with those from Schmidt, Oh, and Hayes (2009), who independently designed and conducted a similar study. Recommendations for conducting future meta-analyses in light of these findings are provided.
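A hedged sketch of a prospective power calculation in the spirit of Hedges and Pigott (2001), restricted for simplicity to the fixed-effect test that the mean standardized mean difference is zero; the function name, the equal-study assumption, and the grid of inputs are illustrative, and allowing for heterogeneity would further inflate the variance and lower power.

```python
import numpy as np
from scipy.stats import norm

def power_mean_effect(k, n_per_group, delta, alpha=0.05):
    """Prospective power for the two-sided fixed-effect test that the mean
    standardized mean difference is zero, given k studies with n_per_group
    participants per arm (illustrative sketch, not the article's code)."""
    # Approximate sampling variance of one standardized mean difference
    v_i = 2 / n_per_group + delta ** 2 / (4 * n_per_group)
    v_mean = v_i / k                      # variance of the weighted mean effect
    lam = delta / np.sqrt(v_mean)         # noncentrality parameter
    c = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(c - lam) + norm.cdf(-c - lam)

# Power rises with the number of effect sizes, per-study sample size, and effect magnitude.
for k in (5, 20):
    for n in (25, 100):
        print(f"k={k:2d}, n/group={n:3d}, d=0.2: power={power_mean_effect(k, n, 0.2):.2f}")
```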
Given the statistical bias that occurs when the design effects of complex sample data are not incorporated or sample weights are omitted, this study calls for improvement in the dissemination of research findings based on complex sample data. Authors, editors, and reviewers need to work together to improve the transparency of published findings using complex sample data.
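A toy illustration of the weighting side of that bias, assuming a hypothetical two-stratum population sampled at unequal rates (all numbers invented for the example): the unweighted mean is pulled toward the oversampled stratum, while applying the sample weights recovers the population value. Design effects would additionally inflate standard errors, which a naive analysis also ignores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: two strata with different sizes and means.
stratum_sizes = {"A": 90_000, "B": 10_000}
stratum_means = {"A": 50.0, "B": 80.0}

# Disproportionate sampling: stratum B is heavily oversampled.
sample_sizes = {"A": 300, "B": 700}

values, weights = [], []
for s in ("A", "B"):
    y_s = rng.normal(stratum_means[s], 10.0, sample_sizes[s])
    w_s = stratum_sizes[s] / sample_sizes[s]     # inverse-probability weight
    values.append(y_s)
    weights.append(np.full(sample_sizes[s], w_s))
y = np.concatenate(values)
w = np.concatenate(weights)

pop_mean = (sum(stratum_sizes[s] * stratum_means[s] for s in ("A", "B"))
            / sum(stratum_sizes.values()))
print(f"population mean : {pop_mean:.1f}")
print(f"unweighted mean : {y.mean():.1f}   # biased toward the oversampled stratum")
print(f"weighted mean   : {np.average(y, weights=w):.1f}")
```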
Centering variables prior to the analysis of moderated multiple regression equations has been advocated for reasons both statistical (reduction of multicollinearity) and substantive (improved interpretation of the resulting regression equations). This article provides a comparison of centered and raw score analyses in least squares regression. The two methods are shown to be equivalent, yielding identical hypothesis tests for the moderation effect and functionally equivalent regression equations.
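A minimal sketch of that equivalence, assuming simulated data and the statsmodels formula interface (the variable names x, z, and y are illustrative): fitting the moderated regression on raw and on mean-centered predictors yields the same t statistic for the interaction term and identical fitted values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate data with a moderation (interaction) effect.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(10, 2, n)          # raw-score predictor
z = rng.normal(5, 1, n)           # raw-score moderator
y = 1 + 0.5 * x + 0.3 * z + 0.2 * x * z + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "x": x, "z": z,
                   "xc": x - x.mean(), "zc": z - z.mean()})

raw = smf.ols("y ~ x * z", data=df).fit()     # raw-score analysis
cen = smf.ols("y ~ xc * zc", data=df).fit()   # centered analysis

# The interaction test is identical in both analyses ...
print(raw.tvalues["x:z"], cen.tvalues["xc:zc"])
# ... and the two equations are functionally equivalent (same fitted values).
print(np.allclose(raw.fittedvalues, cen.fittedvalues))
```

Only the lower-order coefficients differ between the two fits, because centering changes the point at which those conditional effects are evaluated, not the model itself.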