Articles published in several prominent educational journals were examined to investigate the use of data-analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and the methods employed for data analysis, we also catalogued whether: (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected based on power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. Our analyses imply that researchers rarely verify that validity assumptions are satisfied and accordingly typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. We offer many recommendations to rectify these shortcomings.

Statistical Practices of Educational Researchers: An Analysis of Their ANOVA, MANOVA, and ANCOVA Analyses

It is well known that the volume of published educational research is increasing at a very rapid pace. As a consequence of the expansion of the field, qualitative and quantitative reviews of the literature are becoming more common. These reviews typically focus on summarizing the results of research in particular areas of scientific inquiry (e.g., academic achievement or English as a second language) as a means of highlighting important findings and identifying gaps in the literature. Less common, but equally important, are reviews that focus on the research process, that is, the methods by which a research topic is addressed, including research design and statistical analysis issues. Methodological research reviews have a long history (e.g., Edgington, 1964; Elmore & Woehlke, 1988; Goodwin & Goodwin, 1985a, 1985b; West, Carmody, & Stallings, 1983). One purpose of these reviews has been the identification of trends in data-analytic practice. The documentation of such trends has a two-fold purpose: (a) it can form the basis for recommending improvements in research practice, and (b) it can serve as a guide to the types of inferential procedures that should be taught in methodological courses, so that students have adequate skills to interpret the published literature of a discipline and to carry out their own projects. One consistent finding of methodological research reviews is that a substantial gap often exists between the inferential methods recommended in the statistical research literature and the techniques actually adopted by applied researchers (Goodwin & Goodwin, 1985b; Ridgeway, Dunston, & Qian, 1993). The practice of relying on traditional methods of analysis is, however, dangerous. The field of statistics is by no means static; improvements ...
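The three reporting practices catalogued in the review (assumption checking, effect size reporting, and a priori power analysis) can be illustrated concretely. The following is a minimal sketch for a one-way between-subjects design with invented data, assuming the scipy and statsmodels libraries; it is not drawn from any of the reviewed articles.

```python
# Sketch of the reporting practices the review catalogues: assumption checks,
# an effect size index, and an a priori power analysis. Data and group
# structure are invented for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.power import FTestAnovaPower

rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.3, 0.6)]

# (a) Validity assumptions: normality within groups, homogeneity of variance.
print([stats.shapiro(g).pvalue for g in groups])   # normality, per group
print(stats.levene(*groups).pvalue)                # equal-variance check

# One-way between-subjects ANOVA F test.
f_stat, p_val = stats.f_oneway(*groups)

# (b) Effect size: eta squared = SS_between / SS_total.
all_obs = np.concatenate(groups)
ss_between = sum(len(g) * (g.mean() - all_obs.mean()) ** 2 for g in groups)
ss_total = ((all_obs - all_obs.mean()) ** 2).sum()
eta_sq = ss_between / ss_total

# (c) Power analysis: total N needed to detect a medium effect (Cohen's f = .25)
# with alpha = .05 and power = .80 in a three-group design.
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}, eta^2 = {eta_sq:.3f}, "
      f"required total N = {n_total:.0f}")
```

Eta squared is only one of several effect size indices; Cohen's f is used in the power step because that is the parameterization statsmodels expects.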
The presence of variance heterogeneity and nonnormality in educational and psychological data may frequently invalidate the use of the analysis of variance (ANOVA) F test in one-way independent groups designs. This article offers recommendations to applied researchers on the use of various parametric and nonparametric alternatives to the F test under assumption violation conditions. Meta-analytic techniques were used to summarize the statistical robustness literature on the Type I error properties of the Brown-Forsythe (Brown & Forsythe, 1974), James (1951) second-order, Kruskal-Wallis (Kruskal & Wallis, 1952), and Welch (1951) tests. Two variables, based on the theoretical work of Box (1954), are shown to be highly effective in deciding when a particular alternative procedure should be adopted. Based on the meta-analysis findings, it is recommended that researchers gain a clear understanding of the nature of their data before conducting statistical analyses. Of all of the procedures, the James and Welch tests performed best under violations of the variance homogeneity assumption, although their sensitivity to certain types of nonnormality may preclude their use in all data-analytic situations. Opportunities for further methodological studies of ANOVA alternative procedures are also discussed.
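As a concrete point of reference, the sketch below runs the classical F test alongside three of the four alternatives on invented heteroscedastic data, assuming scipy and statsmodels; the James (1951) second-order test is omitted because it has no widely used Python implementation.

```python
# Sketch of the F test and three of the alternatives the meta-analysis
# compares, run on illustrative data with unequal variances.
import numpy as np
from scipy import stats
from statsmodels.stats.oneway import anova_oneway

rng = np.random.default_rng(2)
# Unequal variances paired with unequal group sizes: the condition under
# which the classical F test is known to be nonrobust.
groups = (rng.normal(0, 1, 15), rng.normal(0, 3, 30), rng.normal(0, 5, 45))

print("F test:        ", stats.f_oneway(*groups).pvalue)
print("Welch:         ", anova_oneway(groups, use_var="unequal").pvalue)
print("Brown-Forsythe:", anova_oneway(groups, use_var="bf").pvalue)
print("Kruskal-Wallis:", stats.kruskal(*groups).pvalue)
```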
Four pairwise multiple comparison procedures for achieving approximate familywise Type I error control were investigated when multisample sphericity was violated. The test statistic in all cases was the ratio of the corresponding sample mean difference to an estimate of its variance. Bonferroni, Studentized range, and Studentized maximum modulus critical values, each with Satterthwaite degrees of freedom, and an analog of the Cochran critical value were used with the test statistic. Results indicated that all procedures, except for the Cochran procedure, provided reasonable Type I error control in most cases. The Cochran procedure generally was very conservative.
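The paper's setting is a repeated measures design under multisample sphericity violation, but the shared form of its test statistic (a sample mean difference over an estimate of its variability, referred to Satterthwaite degrees of freedom) can be sketched in a simplified between-subjects analogue. The sketch below pairs Welch-Satterthwaite t statistics with a Bonferroni bound; data and group labels are invented, and scipy is assumed.

```python
# Simplified analogue of one of the four procedures: pairwise test statistics
# with Satterthwaite degrees of freedom, combined with a Bonferroni critical
# value for familywise error (FWE) control.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {"g1": rng.normal(0.0, 1, 12),
          "g2": rng.normal(0.8, 2, 12),
          "g3": rng.normal(1.5, 4, 12)}

alpha = 0.05
pairs = list(itertools.combinations(groups, 2))
for a, b in pairs:
    # Welch t: mean difference over the estimated standard error of the
    # difference, with Satterthwaite-approximated degrees of freedom.
    res = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    reject = res.pvalue < alpha / len(pairs)   # Bonferroni bound
    print(f"{a} vs {b}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}, "
          f"reject at FWE {alpha}: {reject}")
```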