Assessing overall fit is a topic of keen interest to structural equation modelers, yet measuring goodness of fit has been hampered by several factors. First, the assumptions that underlie the chi-square tests of model fit are often violated. Second, many fit measures (e.g., Bentler and Bonett's [1980] normed fit index) have unknown statistical distributions, so that hypothesis testing, confidence intervals, or tests of significant differences in these fit indices are not possible. Finally, modelers have little knowledge about the distribution and behavior of the fit measures for misspecified models or for nonnested models. Given this situation, bootstrapping techniques would appear to be an ideal means to tackle these problems. Indeed, Bentler's (1989) EQS 3.0 and Jöreskog and Sörbom's (forthcoming) LISREL 8 have bootstrap resampling options to bootstrap fit indices. In this article the authors (a) demonstrate that the usual bootstrapping methods will fail when applied to the original data, (b) explain why this occurs, and (c) propose a modified bootstrap method for the chi-square test statistic for model fit. They include simulated and empirical examples to illustrate their results.
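The key idea behind the modified method is to transform the data before resampling so that the null model holds exactly in the bootstrap population; naively resampling the original data would bootstrap from a population where the model may be false, inflating the reference distribution. A minimal NumPy sketch of this transformation, using a made-up model-implied covariance matrix in place of one obtained from an actual fitted structural equation model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n observations on p variables (hypothetical example).
n, p = 500, 3
Y = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.3, size=n)
Yc = Y - Y.mean(axis=0)          # center the data
S = Yc.T @ Yc / n                # sample covariance (divisor n)

# Model-implied covariance under the null hypothesis. Here a stand-in
# matrix; in practice this comes from the fitted model.
Sigma_hat = np.eye(p) + 0.3

def mat_power(A, power):
    """Symmetric matrix power via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** power) @ V.T

# Transform so the covariance of Z equals Sigma_hat exactly; resampling
# rows of Z then draws from a population in which H0 is true, and the
# bootstrap distribution of the chi-square statistic over those
# resamples gives a valid reference distribution for the test.
Z = Yc @ mat_power(S, -0.5) @ mat_power(Sigma_hat, 0.5)
```

Each bootstrap replicate would refit the model to a resample of the rows of `Z` and record the chi-square statistic; the observed statistic is then compared against that collection.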
Bootstrap methods are a collection of sample re-use techniques designed to estimate standard errors and confidence intervals. Making use of numerous samples drawn from the initial observations, these techniques require fewer assumptions and offer greater accuracy and insight than do standard methods in many problems. After presenting the underlying concepts, this introduction focuses on applications in regression analysis. These applications contrast two forms of bootstrap resampling in regression, illustrating their differences in a series of examples that include outliers and heteroscedasticity. Other regression examples use the bootstrap to estimate standard errors of robust estimators in regression and indirect effects in path models. Numerous variations of bootstrap confidence intervals exist, and examples stress the concepts that are common to the various approaches. Suggestions for computing bootstrap estimates appear throughout the discussion, and a section on computing suggests several broad guidelines.
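The two forms of regression resampling contrasted above can be sketched briefly. Resampling (x, y) pairs treats the design as random and remains valid under heteroscedasticity; resampling residuals keeps the design fixed but assumes exchangeable errors. A minimal sketch on simulated data (the data and simple spending of 1000 replicates are illustrative choices, not from the original):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression data (hypothetical example): y = 2 + 0.5 x + noise.
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, y)
resid = y - X @ beta_hat
B = 1000

# Form 1: resample (x, y) pairs -- random design, robust to heteroscedasticity.
pair_betas = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    pair_betas[b] = ols(X[idx], y[idx])

# Form 2: resample residuals -- fixed design, assumes i.i.d. errors.
resid_betas = np.empty((B, 2))
for b in range(B):
    y_star = X @ beta_hat + rng.choice(resid, n, replace=True)
    resid_betas[b] = ols(X, y_star)

# Bootstrap standard errors of the slope under each scheme.
se_pairs = pair_betas[:, 1].std(ddof=1)
se_resid = resid_betas[:, 1].std(ddof=1)
```

With homoscedastic errors, as here, the two standard errors agree closely; with outliers or heteroscedasticity they can diverge, which is the contrast the examples in the article develop.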
α-investing is an adaptive sequential methodology that encompasses a large family of procedures for testing multiple hypotheses. All of these procedures control mFDR, which is the ratio of the expected number of false rejections to the expected number of rejections. mFDR is a weaker criterion than the false discovery rate, which is the expected value of the ratio. We compensate for this weakness by showing that α-investing controls mFDR at every rejected hypothesis. α-investing resembles the α-spending used in sequential trials, but with a key difference: when a test rejects a null hypothesis, α-investing earns additional probability toward subsequent tests. α-investing hence allows us to incorporate domain knowledge into the testing procedure and to improve the power of the tests. In this way, α-investing enables the statistician to design a testing procedure for a specific problem while guaranteeing control of mFDR.
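The earn-and-spend mechanic can be sketched as a wealth process: spending level α_j on test j costs α_j/(1−α_j) when the null is not rejected, while a rejection earns a fixed payout toward future tests. The spending policy below (betting half the current wealth on each test) and the particular initial wealth and payout values are illustrative assumptions, not prescriptions from the article:

```python
def alpha_investing(pvalues, initial_wealth=0.05, payout=0.05):
    """Sketch of an alpha-investing testing sequence.

    Wealth update: spending alpha_j on test j costs alpha_j / (1 - alpha_j)
    if the null is not rejected; a rejection earns `payout` toward
    subsequent tests.
    """
    wealth = initial_wealth
    decisions = []
    for p in pvalues:
        # Spending policy (an arbitrary illustrative choice): bet half
        # the current wealth, so wealth can never be exhausted.
        spend = wealth / 2.0
        alpha_j = spend / (1.0 + spend)   # alpha_j / (1 - alpha_j) == spend
        if p <= alpha_j:
            wealth += payout              # earn: null rejected
            decisions.append(True)
        else:
            wealth -= spend               # pay: failed to reject
            decisions.append(False)
    return decisions
```

Because rejections replenish wealth, an informative ordering of the hypotheses (domain knowledge placing likely discoveries early) leaves more wealth, and hence more power, for later tests.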