1972
DOI: 10.1002/tea.3660090410
The power of statistical tests in science teaching research

Abstract: A calculation of the probability of rejecting H0 when it should be rejected (power) was completed on each of the 66 applicable articles in Volumes 6 and 7 (1969, 1970) of the Journal of Research in Science Teaching. These power calculations utilized the effect size definitions and tables developed by Cohen (1969). The mean power of each article to detect small, medium, and large effect sizes was determined from its major statistical tests. These mean powers were then compiled and analyzed. The powers calculate…
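A minimal sketch of the kind of calculation the abstract describes: given a study's per-group sample size and alpha level, compute the probability of rejecting H0 for Cohen's conventional small (d = 0.2), medium (d = 0.5), and large (d = 0.8) effect sizes. The original survey used Cohen's (1969) tables by hand; the snippet below uses statsmodels as a modern stand-in, and the group size and alpha are illustrative assumptions, not values taken from the article.

```python
# Illustrative power calculation in the spirit of the surveyed procedure:
# for an assumed two-group design, compute power to detect Cohen's
# small, medium, and large standardized effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_per_group = 20   # assumed group size, not taken from the article
alpha = 0.05       # assumed significance level

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0, alternative="two-sided")
    print(f"{label:6s} effect (d = {d}): power = {power:.2f}")
```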


Cited by 13 publications (11 citation statements)
References 1 publication
“…This reflects the all too common practice of testing large matrices of coefficients for all possible pairings of study variables (and inviting the wrath of capitalization on chance). Despite this, however, the relative frequency of each test seems to be in line with what has been seen in other surveys (Chase & Chase, 1976), as well as the earlier analysis of this journal (Penick & Brewer, 1972). Also, the mean sample size for the studies in Volumes 14-17, x̄ = 154.6, has remained relatively fixed in JRST over the past decade (see Table 11).…”
Section: Results (supporting)
confidence: 86%
“…Since that time the statistical power of published tests has been at issue in fields as diverse as communication (Chase & Tucker, 1975; Katzer & Sodt, 1973), gerontology (Levenson, 1980), and medicine (Freiman et al., 1978). Ten years have passed since the Journal of Research in Science Teaching made its contribution to this debate through the work of Penick and Brewer (1972).…”
Section: Introduction (mentioning)
confidence: 99%
“…However, if they cannot tolerate the adjusted levels of alpha, power, or both (that is, if alpha is too high or power too low to be justifiable from a research point of view), then they should consider postponing the study until they can get a larger sample. In the studies analysed by Cohen (1962), Brewer (1972), Jones and Brewer (1972), Penick and Brewer (1972), Brewer and Owen (1973), Katzer and Sodt (1973), and Chase and Chase (1976), respectively, the researchers found a dismal state of affairs with regard to the power of the tests conducted, as indicated in Table 2. Table 2: Results of studies on the power of hypothesis tests…”
Section: Implications Of Not Meeting Sample Size Requirements (mentioning)
confidence: 99%
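The advice in the quoted passage, to postpone the study until a larger sample is available, maps onto a routine prospective power analysis. A hedged sketch, assuming an independent-samples t-test, a medium effect (d = 0.5), alpha = .05, and a target power of .80; none of these values come from the cited studies.

```python
# Prospective sample-size planning: how many participants per group are
# needed to reach the target power before running the study?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_required = analysis.solve_power(effect_size=0.5,   # assumed medium effect
                                  power=0.80,        # assumed target power
                                  alpha=0.05,
                                  ratio=1.0,
                                  alternative="two-sided")
print(f"Required n per group: {n_required:.1f}")  # roughly 64 per group
```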
“…With the exception of the studies analysed by Penick and Brewer (1972) and Brewer and Owen (1973), in which the mean power of the tests was computed at .71 and .72 respectively, the rest of the tests in the journals surveyed had a more or less 50-50 chance of detecting a medium effect. This implies that the researchers who conducted hypothesis testing at such low levels of power would have saved a lot of time and energy had they just tossed a coin in deciding whether or not to reject, since the probability of a correct rejection was approximately 1/2.…”
Section: Implications Of Not Meeting Sample Size Requirements (mentioning)
confidence: 99%
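To make the coin-toss point concrete: with roughly 32 participants per group, a two-sided independent-samples t-test at alpha = .05 has power of about .50 against a medium effect (d = 0.5). A small sketch under those assumed numbers, which are illustrative rather than drawn from the surveyed journals.

```python
# Power of a typically sized study against a medium effect is close to a
# coin flip, which is the point the quoted passage is making.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5,  # Cohen's medium effect
                              nobs1=32,         # assumed participants per group
                              alpha=0.05,
                              alternative="two-sided")
print(f"Power to detect d = 0.5 with n = 32 per group: {power:.2f}")  # ~0.50
```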