We provide conceptual introductions to missingness mechanisms (missing completely at random, missing at random, and missing not at random) and to state-of-the-art methods of handling missing data (full-information maximum likelihood and multiple imputation), followed by a discussion of planned missing designs: multiform questionnaire protocols, 2-method measurement models, and wave-missing longitudinal designs. We review 80 empirical studies published in the 2012 issues of the Journal of Pediatric Psychology to present a picture of how adequately missing data are currently handled in this field. To illustrate the benefits of using multiple imputation or full-information maximum likelihood and of incorporating planned missingness into study designs, we provide example analyses of empirical data gathered using a 3-form planned missing design.
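To make the multiple-imputation workflow concrete, the following is a minimal Python sketch, not the article's own analysis: it generates several imputed datasets and pools a point estimate with Rubin's rules. The use of scikit-learn's IterativeImputer, the simulated data, and all variable names are illustrative assumptions.

```python
# Minimal multiple-imputation sketch (illustrative only, not the article's analysis).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))              # hypothetical complete data
X[rng.random(X.shape) < 0.3] = np.nan      # impose ~30% missingness (MCAR)

m = 20                                     # number of imputations
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    X_imp = imputer.fit_transform(X)
    estimates.append(X_imp[:, 0].mean())   # analysis of interest: mean of column 0
    variances.append(X_imp[:, 0].var(ddof=1) / X_imp.shape[0])

# Pool with Rubin's rules: total variance = within + (1 + 1/m) * between
qbar = np.mean(estimates)                  # pooled point estimate
ubar = np.mean(variances)                  # within-imputation variance
b = np.var(estimates, ddof=1)              # between-imputation variance
total_var = ubar + (1 + 1 / m) * b
print(f"pooled mean = {qbar:.3f}, SE = {np.sqrt(total_var):.3f}")
```

In a substantive analysis the quantity estimated in each imputed dataset would be a model parameter (e.g., a regression coefficient) rather than a simple mean, but the pooling step is the same.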
In multigroup factor analysis, different levels of measurement invariance are accepted as tenable when researchers observe a nonsignificant (Δ)χ2 test after imposing certain equality constraints across groups. Large samples yield high power to detect negligible misspecifications, so many researchers prefer alternative fit indices (AFIs). Fixed cutoffs have been proposed for evaluating the effect of invariance constraints on change in AFIs (e.g., Chen, 2007; Cheung & Rensvold, 2002; Meade, Johnson, & Braddy, 2008). We demonstrate that all of these cutoffs have inconsistent Type I error rates. As a solution, we propose replacing χ2 and fixed AFI cutoffs with permutation tests. Randomly permuting group assignment results in average between-groups differences of zero, so iterative permutation yields an empirical distribution of any fit measure under the null hypothesis of invariance across groups. Our simulations show that the permutation test of configural invariance controls Type I error rates better than χ2 or AFIs when the model contains parsimony error (i.e., negligible misspecification) but the factor structure is equivalent across groups (i.e., the null hypothesis is true). For testing metric and scalar invariance, Δχ2 and permutation yield similar power and nominal Type I error rates, whereas ΔAFIs yield inflated errors in smaller samples. Permuting the maximum modification index among equality constraints controls the familywise Type I error rate when testing multiple indicators for lack of invariance, while providing power similar to that of a Bonferroni adjustment. An applied example and software syntax are provided.
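The permutation logic described above can be sketched generically. In the Python sketch below, which is an illustration under stated assumptions rather than the authors' software, a simple between-groups statistic stands in for the model fit measure; in a real application the factor model would be refit to each permuted grouping and χ2 or an AFI recorded.

```python
# Generic permutation-test sketch of the logic in the abstract (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=300)                    # hypothetical outcome
groups = np.repeat([0, 1], 150)             # hypothetical two-group assignment

def fit_measure(y, groups):
    # Stand-in for a fit statistic (e.g., chi-square or an AFI from a refit model).
    return abs(y[groups == 0].mean() - y[groups == 1].mean())

observed = fit_measure(y, groups)

n_perm = 1000
null_dist = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(groups)          # randomly permute group labels
    null_dist[i] = fit_measure(y, perm)     # recompute the measure under the null

# p value: proportion of permuted statistics at least as extreme as observed
p_value = (1 + np.sum(null_dist >= observed)) / (1 + n_perm)
print(f"observed = {observed:.3f}, permutation p = {p_value:.3f}")
```

Because group labels are exchangeable under the null hypothesis of invariance, the permuted statistics trace out the reference distribution directly, with no fixed cutoff required.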
In a frequentist framework, the exact fit of a structural equation model (SEM) is typically evaluated with the chi-square test and at least one index of approximate fit. Current Bayesian SEM (BSEM) software provides one measure of overall fit: the posterior predictive p value (PPPχ2). Because of the noted limitations of PPPχ2, common practice for evaluating Bayesian model fit instead focuses on model comparison, using information criteria or Bayes factors. Fit indices developed under maximum-likelihood estimation have not been incorporated into software for BSEM. We propose adapting 7 chi-square-based approximate fit indices for BSEM, using a Bayesian analog of the chi-square model-fit statistic. Simulation results show that the sampling distributions of the posterior means of these fit indices are similar to their frequentist counterparts across sample sizes, model types, and levels of misspecification when BSEMs are estimated with noninformative priors. The proposed fit indices therefore allow overall model-fit evaluation using familiar metrics of the original indices, with an accompanying interval to quantify their uncertainty. Illustrative examples with real data raise some important issues about the proposed fit indices’ application to models specified with informative priors, when Bayesian and frequentist estimation methods might not yield similar results.
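For orientation, the sketch below gives the standard frequentist chi-square-based formulas for two such indices (RMSEA and CFI), the quantities the article adapts by evaluating a Bayesian analog of χ2 across posterior draws. The conventional ML-based definitions are used here; the article's specific Bayesian adaptation is not reproduced, and the example numbers are hypothetical. Note that RMSEA conventions differ on whether N or N - 1 appears in the denominator; N - 1 is used here.

```python
# Standard chi-square-based fit index formulas (frequentist form; example values
# are hypothetical). The article evaluates analogs of these across posterior draws.
import math

def rmsea(chisq, df, n):
    """Root mean square error of approximation (N - 1 convention)."""
    return math.sqrt(max(chisq - df, 0) / (df * (n - 1)))

def cfi(chisq, df, chisq_null, df_null):
    """Comparative fit index, relative to the baseline (null) model."""
    num = max(chisq - df, 0)
    denom = max(chisq_null - df_null, chisq - df, 0)
    return 1 - num / denom if denom > 0 else 1.0

# Hypothetical model and baseline chi-square values, N = 300
print(rmsea(chisq=85.0, df=40, n=300))                           # ~0.061
print(cfi(chisq=85.0, df=40, chisq_null=900.0, df_null=55))      # ~0.947
```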
These findings indicate that the CAPS-5 can be seen as measuring 2 distinct phenomena: posttraumatic stress disorder and general posttraumatic dysphoria. This is an important contribution to the current debate on which latent factors constitute PTSD and may reduce discordance in that debate.