Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the conditional relations are often tedious and error-prone tasks. This article provides an overview of methods used to probe interaction effects and describes a unified collection of freely available online resources that researchers can use to obtain significance tests for simple slopes, compute regions of significance, and obtain confidence bands for simple slopes across the range of the moderator in the MLR, HLM, and LCA contexts. Plotting capabilities are also provided.
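As a rough illustration of the computation these tools automate in the simplest MLR case (a minimal sketch with simulated data and hypothetical variable names, not the article's online utilities): in the model y = b0 + b1·X + b2·Z + b3·X·Z, the simple slope of X at a chosen moderator value Z is b1 + b3·Z, and its sampling variance follows from the covariance matrix of the estimates.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: outcome y, focal predictor x, moderator z.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

# Fit the moderated regression y = b0 + b1*x + b2*z + b3*x*z.
X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()
b = fit.params          # [b0, b1, b2, b3]
V = fit.cov_params()    # covariance matrix of the estimates

def simple_slope(z0):
    """Simple slope of x on y at moderator value z0, with its standard error."""
    slope = b[1] + b[3] * z0
    var = V[1, 1] + 2 * z0 * V[1, 3] + z0 ** 2 * V[3, 3]
    return slope, np.sqrt(var)

# Probe the interaction at -1 SD, the mean, and +1 SD of the moderator.
for z0 in (-z.std(), 0.0, z.std()):
    slope, se = simple_slope(z0)
    print(f"z = {z0:+.2f}: slope = {slope:.3f}, SE = {se:.3f}, t = {slope / se:.2f}")
```

Regions of significance are then the moderator values at which the resulting t ratio crosses the chosen critical value.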
Monte Carlo computer simulations were used to investigate the performance of three χ² test statistics in confirmatory factor analysis (CFA). Normal theory maximum likelihood χ² (ML), Browne's asymptotic distribution-free χ² (ADF), and the Satorra-Bentler rescaled χ² (SB) were examined under varying conditions of sample size, model specification, and multivariate distribution. For properly specified models, ML and SB showed no evidence of bias under normal distributions across all sample sizes, whereas ADF was biased at all but the largest sample sizes. ML was increasingly overestimated with increasing nonnormality, but both SB (at all sample sizes) and ADF (only at large sample sizes) showed no evidence of bias. For misspecified models, ML was again inflated with increasing nonnormality, but both SB and ADF were underestimated with increasing nonnormality. It appears that the power of the SB and ADF test statistics to detect a model misspecification is attenuated given nonnormally distributed data.

Confirmatory factor analysis (CFA) has become an increasingly popular method of investigating the structure of data sets in psychology. In contrast to traditional exploratory factor analysis, which does not place strong a priori restrictions on the structure of the model being tested, CFA requires the investigator to specify both the number of factors and the specific pattern of loadings of each of the measured variables on the underlying set of factors. In typical simple CFA models, each measured variable is hypothesized to load on only one factor, and positive, negative, or zero (orthogonal) correlations are specified between the factors. Such models can provide strong evidence about the convergent and discriminant validity of a set of measured variables and allow tests among a set of theories of measurement structure. More complicated CFA models may specify more complex patterns of factor loadings, correlations among errors or specific factors, or both. In all cases, CFA models set restrictions on the factor loadings, the correlations between factors, and the correlations between errors of measurement that permit tests of the fit of the hypothesized model to the data.

There are two general classes of assumptions that underlie the statistical methods used to estimate CFA models: distributional and structural (Satorra, 1990). Normal theory maximum likelihood (ML) estimation has been used to analyze the majority of CFA models. ML makes the distributional assumption that the measured variables have a multivariate normal distribution in the population.
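For reference, the Satorra-Bentler rescaling mentioned above divides the normal-theory test statistic by an estimated scaling correction that reflects the distortion induced by nonnormal kurtosis. A sketch of the general form, in generic notation rather than the exact estimator used in any particular software:

```latex
T_{SB} = \frac{T_{ML}}{\hat{c}}, \qquad
\hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{d},
```

where T_ML is the normal-theory likelihood ratio statistic, d its degrees of freedom, Γ̂ an estimate of the asymptotic covariance matrix of the sample variances and covariances, and Û a residual weight matrix implied by the fitted model. Under multivariate normality ĉ is close to 1, so the two statistics roughly coincide.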
Confirmatory factor analysis (CFA) is widely used for examining hypothesized relations among ordinal variables (e.g., Likert-type items). A theoretically appropriate method fits the CFA model to polychoric correlations using either weighted least squares (WLS) or robust WLS. Importantly, this approach assumes that a continuous, normal latent process determines each observed variable. The extent to which violations of this assumption undermine CFA estimation is not well known. In this article, the authors investigate this issue using a computer simulation study. The results suggest that estimation of polychoric correlations is robust to modest violations of underlying normality. Further, WLS performed adequately only at the largest sample size and led to substantial estimation difficulties with smaller samples. Finally, robust WLS performed well across all conditions.

Variables characterized by an ordinal level of measurement are common in many empirical investigations within the social and behavioral sciences. A typical situation involves the development or refinement of a psychometric test or survey in which a set of ordinally scaled items (e.g., 0 = strongly disagree, 1 = neither agree nor disagree, 2 = strongly agree) is used to assess one or more psychological constructs. Although the individual items are designed to measure a theoretically continuous construct, the observed responses are discrete realizations of a small number of categories. Statistical methods that assume continuous distributions are often applied to observed measures that are ordinally scaled. In circumstances such as these, there is the potential for a critical mismatch between the assumptions underlying the statistical model and the empirical characteristics of the data to be analyzed. This mismatch in turn undermines confidence in the validity of the conclusions that are drawn from empirical data with respect to a theoretical model of interest (e.g., Shadish, Cook, & Campbell, 2002).

This problem often arises in confirmatory factor analysis (CFA), a statistical modeling method commonly used in many social science disciplines. CFA is a member of the more general family of structural equation models (SEMs) and provides a powerful method for testing a variety of hypotheses about a set of measured variables. By far the most common method of estimation within CFA is maximum likelihood (ML), a technique that assumes the observed variables are continuous and normally distributed (e.g., Bollen, 1989, pp. 131-134). These assumptions are not met when the observed data are discrete (as occurs when using ordinal scales), and thus significant problems can result when fitting CFA models.
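The latent response formulation underlying the polychoric approach can be summarized briefly (a sketch in generic notation): each observed ordinal item y_j, with categories c = 0, ..., C - 1, is treated as a coarse categorization of an underlying continuous, normally distributed variable y_j* cut at a set of thresholds,

```latex
y_j = c \quad \Longleftrightarrow \quad \tau_{j,c} \le y_j^{*} < \tau_{j,c+1},
\qquad \tau_{j,0} = -\infty, \; \tau_{j,C} = +\infty .
```

The polychoric correlation between two items is the correlation of the corresponding y* variables estimated under an assumed underlying bivariate normal distribution, and WLS or robust WLS then fits the CFA model to the resulting correlation matrix rather than to Pearson correlations computed from the raw item scores.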
Longitudinal models are becoming increasingly prevalent in the behavioral sciences, with key advantages including increased power, more comprehensive measurement, and establishment of temporal precedence. One particularly salient strength offered by longitudinal data is the ability to disaggregate between-person and within-person effects in the regression of an outcome on a time-varying covariate. However, the ability to disaggregate these effects has not been fully capitalized upon in many social science research applications. Two likely reasons for this omission are the general lack of discussion of disaggregating effects in the substantive literature and the need to overcome several remaining analytic challenges that limit existing quantitative methods used to isolate these effects in practice. This review explores both substantive and quantitative issues related to the disaggregation of effects over time, with a particular emphasis placed on the multilevel model. Existing analytic methods are reviewed, a general approach to the problem is proposed, and both the existing and proposed methods are demonstrated using several artificial data sets. Potential limitations and directions for future research are discussed, and recommendations for the disaggregation of effects in practice are offered.
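One widely used way to achieve this disaggregation (a minimal sketch with simulated data, not the article's full proposed approach) is person-mean centering: the time-varying covariate is split into the person mean, which carries the between-person effect, and the occasion-specific deviation from that mean, which carries the within-person effect, and each part enters the multilevel model as its own predictor.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per person-occasion.
rng = np.random.default_rng(1)
n_persons, n_times = 200, 5
person_effect = rng.normal(size=n_persons)
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_persons), n_times),
    "x": rng.normal(size=n_persons * n_times) + np.repeat(person_effect, n_times),
})
df["y"] = 0.2 * df["x"] + np.repeat(person_effect, n_times) + rng.normal(size=len(df))

# Split the time-varying covariate into a between-person part (the person mean)
# and a within-person part (the occasion-specific deviation from that mean).
df["x_between"] = df.groupby("person")["x"].transform("mean")
df["x_within"] = df["x"] - df["x_between"]

# Random-intercept multilevel model with separate between- and within-person slopes.
model = smf.mixedlm("y ~ x_between + x_within", data=df, groups=df["person"])
print(model.fit().summary())
```

In this parameterization the coefficient on x_within estimates the within-person effect and the coefficient on x_between estimates the between-person effect, rather than the two being blended into a single slope.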
Growth mixture models are often used to determine if subgroups exist within the population that follow qualitatively distinct developmental trajectories. However, statistical theory developed for finite normal mixture models suggests that latent trajectory classes can be estimated even in the absence of population heterogeneity if the distribution of the repeated measures is nonnormal. By drawing on this theory, this article demonstrates that multiple trajectory classes can be estimated and appear optimal for nonnormal data even when only 1 group exists in the population. Further, the within-class parameter estimates obtained from these models are largely uninterpretable. Significant predictive relationships may be obscured or spurious relationships identified. The implications of these results for applied research are highlighted, and future directions for quantitative developments are suggested.
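A univariate analogue of this phenomenon is easy to reproduce (an illustrative sketch, not the article's growth-model simulations): fit normal mixtures with increasing numbers of classes to data drawn from a single skewed distribution and compare an information criterion such as the BIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One homogeneous but skewed population: no true subgroups exist.
rng = np.random.default_rng(2)
y = rng.gamma(shape=2.0, scale=1.0, size=2000).reshape(-1, 1)

# Compare 1-, 2-, and 3-class normal mixture solutions by BIC.
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(y)
    print(f"{k} class(es): BIC = {gm.bic(y):.1f}")

# The lower-BIC solution typically has more than one class here, even though
# the data come from a single (nonnormal) population.
```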