2015
DOI: 10.1037/met0000025

Meta-analysis using effect size distributions of only statistically significant studies.

Abstract: Publication bias threatens the validity of meta-analytic results and leads to overestimation of the effect size in traditional meta-analysis. This particularly applies to meta-analyses that feature small studies, which are ubiquitous in psychology. Here we develop a new method for meta-analysis that deals with publication bias. This method, p-uniform, enables (a) testing of publication bias, (b) effect size estimation, and (c) testing of the null-hypothesis of no effect. No current method for meta-analysis pos…
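As a rough illustration of the principle behind p-uniform: conditional on statistical significance, the p-values of the significant studies are uniformly distributed at the true effect size, and one variant of the estimator picks the effect size at which -sum(ln q_i) equals its expectation k under uniformity. The sketch below assumes one-sided z-tests with alpha = .05; the function and variable names are illustrative, not taken from van Assen, van Aert, and Wicherts (2015).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def conditional_p(z, se, delta, alpha=0.05):
    # Probability of a z-value at least this large, conditional on the
    # study being significant, when the common true effect is delta.
    z_crit = norm.ppf(1 - alpha)
    return norm.sf(z - delta / se) / norm.sf(z_crit - delta / se)

def p_uniform_estimate(z, se, alpha=0.05):
    # Under the true delta the conditional p-values q_i are uniform on
    # (0, 1), so -sum(ln q_i) has expectation k; solve for that delta.
    z, se = np.asarray(z, float), np.asarray(se, float)
    k = len(z)
    loss = lambda d: -np.sum(np.log(conditional_p(z, se, d, alpha))) - k
    return brentq(loss, -5.0, 5.0)

# Toy data: three significant studies (z-values, standard errors).
print(p_uniform_estimate(z=[2.1, 2.5, 3.0], se=[0.30, 0.25, 0.20]))
```

Because only significant studies enter the estimate, the method needs no model of how nonsignificant results were suppressed, which is what makes it robust to publication bias.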

Cited by 244 publications (423 citation statements)
References 85 publications (198 reference statements)
“…If a result is inconsistent, it is often impossible (in the absence of raw data) to determine whether the test statistic, the degrees of freedom, or the p-value was incorrectly reported. If the test statistic is incorrect and is used to calculate the effect size for a meta-analysis, that effect size will be incorrect as well, which could affect the outcome of the meta-analysis (in fact, the misreporting of all kinds of statistics is a problem for meta-analyses; Gotzsche, Hrobjartsson, Maric, & Tendal, 2007; Levine & Hullett, 2002). Misreported p-values can likewise distort p-value-based meta-analytic methods such as p-curve (Simonsohn, Nelson, & Simmons, 2014) and p-uniform (van Assen, van Aert, & Wicherts, 2015). Moreover, Wicherts et al. (2011) reported that a higher prevalence of reporting errors was associated with a failure to share data upon request.…”
Section: Most Conclusions in Psychology Are Based on the Results of N…
Citation type: mentioning; confidence: 99%
“…Hence, we expect little p-hacking and substantial evidence of false negatives in reported gender effects in psychology. We apply the Fisher test to significant and nonsignificant gender results to test for evidential value (van Assen, van Aert, & Wicherts, 2015; Simonsohn, Nelson, & Simmons, 2014). More precisely, we investigate whether evidential value depends on whether or not the result is statistically significant, and whether or not the results were in line with expectations expressed in the paper.…”
Section: Discussion
Citation type: mentioning; confidence: 96%
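A minimal sketch of the Fisher test this excerpt applies (not the citing paper's exact implementation): under the joint null hypothesis that none of the k results carries evidence, -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def fisher_test(p_values):
    # Combine k p-values into one chi-square test: under the joint null,
    # -2 * sum(ln p_i) ~ chi-square with 2k degrees of freedom.
    p = np.asarray(p_values, dtype=float)
    statistic = -2 * np.sum(np.log(p))
    df = 2 * len(p)
    return statistic, chi2.sf(statistic, df)

# Toy example: four (possibly transformed) p-values.
stat, p_combined = fisher_test([0.40, 0.07, 0.55, 0.12])
print(f"chi2 = {stat:.2f}, combined p = {p_combined:.3f}")
```

A small combined p-value indicates evidential value somewhere in the set, even when no individual result is convincing on its own.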
“…Statistically nonsignificant results were transformed with Equation 1; statistically significant p-values were divided by alpha (.05; van Assen, van Aert, & Wicherts, 2015; Simonsohn, Nelson, & Simmons, 2014).…”
Section: Methods
Citation type: mentioning; confidence: 99%
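A small sketch of the transformation this excerpt describes: dividing a significant p-value by alpha maps (0, .05) onto (0, 1), so under the null the result is uniform conditional on significance. Equation 1 itself is not reproduced in the excerpt, so the rescaling used below for nonsignificant p-values is an assumption, flagged in the comments.

```python
ALPHA = 0.05

def transform(p):
    if p <= ALPHA:
        # Stated in the excerpt: significant p-values divided by alpha.
        return p / ALPHA
    # Assumed stand-in for Equation 1 (not quoted above): rescale a
    # nonsignificant p-value from (alpha, 1) onto (0, 1).
    return (p - ALPHA) / (1 - ALPHA)

print([round(transform(p), 3) for p in (0.01, 0.049, 0.20, 0.80)])
```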