Articles published in several prominent educational journals were examined to investigate the use of data-analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, we also catalogued whether: (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected based on power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. Our analyses imply that researchers rarely verify that validity assumptions are satisfied and accordingly typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. We offer many recommendations to rectify these shortcomings.

Statistical Practices of Educational Researchers: An Analysis of Their ANOVA, MANOVA, and ANCOVA Analyses

It is well known that the volume of published educational research is increasing at a very rapid pace. As a consequence of the expansion of the field, qualitative and quantitative reviews of the literature are becoming more common. These reviews typically focus on summarizing the results of research in particular areas of scientific inquiry (e.g., academic achievement or English as a second language) as a means of highlighting important findings and identifying gaps in the literature. Less common, but equally important, are reviews that focus on the research process, that is, the methods by which a research topic is addressed, including research design and statistical analysis issues. Methodological research reviews have a long history (e.g., Edgington, 1964; Elmore & Woehlke, 1988; Goodwin & Goodwin, 1985a, 1985b; West, Carmody, & Stallings, 1983). One purpose of these reviews has been the identification of trends in data-analytic practice. The documentation of such trends has a two-fold purpose: (a) it can form the basis for recommending improvements in research practice, and (b) it can be used as a guide for the types of inferential procedures that should be taught in methodological courses, so that students have adequate skills to interpret the published literature of a discipline and to carry out their own projects. One consistent finding of methodological research reviews is that a substantial gap often exists between the inferential methods that are recommended in the statistical research literature and those techniques that are actually adopted by applied researchers (Goodwin & Goodwin, 1985b; Ridgeway, Dunston, & Qian, 1993). The practice of relying on traditional methods of analysis is, however, dangerous. The field of statistics is by no means static; improvements...
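The shortcomings catalogued above (unchecked validity assumptions, unreported effect sizes, sample sizes chosen without power considerations) can each be addressed with standard tools. The following sketch, written in Python with SciPy and statsmodels and using entirely hypothetical data, illustrates one way to carry out each step for a simple three-group between-subjects design; it is an illustration of the recommended practices, not a reconstruction of the authors' procedure.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.power import FTestAnovaPower

    # Hypothetical scores for three independent groups (illustrative only).
    rng = np.random.default_rng(1)
    groups = [rng.normal(50, 10, 25), rng.normal(55, 10, 25), rng.normal(53, 15, 25)]

    # (a) Check validity assumptions: normality within each group
    # (Shapiro-Wilk) and homogeneity of variance across groups (Levene).
    for i, g in enumerate(groups, start=1):
        print(f"group {i} Shapiro-Wilk p = {stats.shapiro(g).pvalue:.3f}")
    print(f"Levene p = {stats.levene(*groups).pvalue:.3f}")

    # (b) Report an effect size alongside the omnibus test, e.g. eta squared.
    f_stat, p_val = stats.f_oneway(*groups)
    scores = np.concatenate(groups)
    ss_between = sum(len(g) * (g.mean() - scores.mean()) ** 2 for g in groups)
    ss_total = ((scores - scores.mean()) ** 2).sum()
    print(f"F = {f_stat:.2f}, p = {p_val:.3f}, eta^2 = {ss_between / ss_total:.3f}")

    # (c) Choose the sample size from an a priori power analysis: total N
    # needed to detect a medium effect (Cohen's f = 0.25) at alpha = .05
    # with power = .80 in a three-group one-way ANOVA.
    n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                            power=0.80, k_groups=3)
    print(f"required total N = {int(np.ceil(n_total))}")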
Repeated measures ANOVA can refer to many different types of analysis. Specifically, this vague term can refer to conventional tests of significance, one of three univariate solutions with adjusted degrees of freedom, two different types of multivariate statistic, or approaches that combine univariate and multivariate tests. Accordingly, it is argued that, by only reporting probability values and referring to statistical analyses as repeated measures ANOVA, authors convey neither the type of analysis that was used nor the validity of the reported probability value, since each of these approaches has its own strengths and weaknesses. The various approaches are presented with a discussion of their strengths and weaknesses, and recommendations are made regarding the 'best' choice of analysis. Additional topics discussed include analyses for missing data and tests of linear contrasts.
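Of the univariate adjusted degrees of freedom solutions mentioned above, the Greenhouse-Geisser correction is the most widely used: it estimates a sphericity parameter epsilon from the sample covariance matrix and multiplies both degrees of freedom of the conventional F test by that estimate. A minimal sketch of the estimator follows (Python with NumPy; the data are simulated stand-ins, and the function name is ours):

    import numpy as np

    def greenhouse_geisser_epsilon(y):
        # y: n-subjects x k-levels matrix of repeated measurements
        k = y.shape[1]
        s = np.cov(y, rowvar=False)          # k x k sample covariance
        row = s.mean(axis=0)                 # double-center the matrix
        s_dc = s - row[:, None] - row[None, :] + s.mean()
        # for a symmetric matrix, trace(s_dc @ s_dc) equals the sum of
        # squared elements, so epsilon = tr(S)^2 / ((k - 1) * tr(S^2))
        return np.trace(s_dc) ** 2 / ((k - 1) * np.sum(s_dc ** 2))

    # Simulated data: 12 subjects, 4 within-subject conditions, with a
    # random subject effect to induce correlation across conditions.
    rng = np.random.default_rng(0)
    y = rng.normal(size=(12, 4)) + rng.normal(size=(12, 1))

    n, k = y.shape
    eps = greenhouse_geisser_epsilon(y)
    df1, df2 = eps * (k - 1), eps * (k - 1) * (n - 1)
    print(f"epsilon = {eps:.3f}, adjusted df = ({df1:.2f}, {df2:.2f})")

The conventional test uses degrees of freedom (k − 1) and (k − 1)(n − 1); epsilon equals 1 under sphericity, and smaller values shrink both degrees of freedom, making the test more conservative. The Huynh-Feldt solution applies a further small-sample correction to this estimate.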
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic. The Welch-James approach is known to provide generally robust tests of treatment effects in a repeated measures between- by within-subjects design under assumption violations, given certain sample size requirements. The mixed-model F tests were based on the Kenward-Roger adjusted degrees of freedom solution, an approach specifically proposed for small-sample settings. The authors investigated Type I error control for repeated measures main and interaction effects in unbalanced designs when normality and covariance homogeneity assumptions did not hold. The mixed-model Kenward-Roger adjusted F tests showed superior Type I error control in small sample size conditions in which the Welch-James type statistic was nonrobust; power rates, however, did not favor one approach over the other.
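(In SAS, the mixed-model side of this comparison corresponds to requesting DDFM=KR on the MODEL statement of PROC MIXED.) The Welch-James statistic itself is an involved multivariate generalization of Welch's heteroscedastic ANOVA; as a flavor of the underlying idea only, the sketch below (Python with NumPy/SciPy, hypothetical unbalanced heteroscedastic data) implements the far simpler one-way between-subjects Welch test, which weights each group by the inverse of its own variance estimate rather than pooling variances.

    import numpy as np
    from scipy import stats

    def welch_anova(*groups):
        # One-way heteroscedastic (Welch, 1951) ANOVA: returns F, df1, df2, p.
        k = len(groups)
        n = np.array([len(g) for g in groups], dtype=float)
        means = np.array([np.mean(g) for g in groups])
        w = n / np.array([np.var(g, ddof=1) for g in groups])  # precision weights
        grand = np.sum(w * means) / np.sum(w)                  # weighted grand mean
        a = np.sum(w * (means - grand) ** 2) / (k - 1)
        tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
        b = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
        f = a / b
        df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
        return f, df1, df2, stats.f.sf(f, df1, df2)

    # Unbalanced groups with unequal variances, the setting studied above.
    rng = np.random.default_rng(2)
    g1, g2, g3 = rng.normal(0, 1, 10), rng.normal(0, 2, 20), rng.normal(0.5, 4, 30)
    f, df1, df2, p = welch_anova(g1, g2, g3)
    print(f"Welch F({df1:.0f}, {df2:.1f}) = {f:.2f}, p = {p:.3f}")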