2018
DOI: 10.31234/osf.io/w94ep
Preprint

How to detect publication bias in psychological research? A comparative evaluation of six statistical methods

Abstract: Publication biases and questionable research practices are assumed to be two of the main causes of the low replication rates observed in the social sciences. Both problems not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in …
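The methods evaluated in the paper all test meta-analytic data for signs of selective publication. As a purely illustrative sketch (not one of the paper's own implementations), the following Python code implements Egger's regression test, a classic funnel-plot-asymmetry check: standardized effects are regressed on precision, and an intercept that differs significantly from zero suggests small-study effects. The simulated data are hypothetical.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a classic publication-bias check (illustrative, not from the paper).
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Regress standardized effects (effect / SE) on precision (1 / SE).
    With no small-study effects the intercept is ~0; an intercept that
    differs significantly from zero suggests funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    z = effects / se              # standardized effect sizes
    precision = 1.0 / se
    res = stats.linregress(precision, z)
    t_stat = res.intercept / res.intercept_stderr
    p_value = 2 * stats.t.sf(abs(t_stat), df=len(z) - 2)
    return res.intercept, p_value

# Example with simulated *unbiased* studies (hypothetical data):
rng = np.random.default_rng(42)
se = rng.uniform(0.05, 0.5, size=40)
effects = rng.normal(0.3, se)     # true effect 0.3, no selection
intercept, p = eggers_test(effects, se)
```

Note that, as the citation statements below emphasize, tests of this kind lose accuracy when true effect sizes are heterogeneous.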

Cited by 27 publications (52 citation statements) | References 39 publications
“…The contour-enhanced funnel plots (Figure 4) did not show any over-representation of marginally significant or just-significant effects in any case. However, it should be acknowledged that many tests for publication bias, including trim-and-fill (Duval & Tweedie, 2000), p-curve (Simonsohn, Nelson, & Simmons, 2014), and p-uniform (van Assen, van Aert, & Wicherts, 2015), are inaccurate when true effect sizes are heterogeneous (Renkewitz & Keiner, 2018; van Aert, Wicherts, & van Assen, 2016), as is almost certainly the case in the current study due to the wide range of meta-analyses included.…”
Section: Discussion
confidence: 71%
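The p-curve test cited above has a simple binomial variant: when a true effect exists, significant p-values skew toward very small values, whereas selective publication of null effects produces a flat p-curve. A hedged sketch in Python of that simplified binomial version (not the full Stouffer-based procedure of Simonsohn et al.):

```python
# Simplified binomial version of the p-curve test (an illustrative
# sketch, not the full Stouffer-based p-curve procedure).
from scipy import stats

def pcurve_binomial(p_values):
    """Among significant results (p < .05), test whether more than half
    fall below .025 -- the right skew expected under a true effect."""
    sig = [p for p in p_values if p < 0.05]
    n_low = sum(p < 0.025 for p in sig)
    return stats.binomtest(n_low, n=len(sig), p=0.5,
                           alternative="greater").pvalue

# Hypothetical set of significant p-values, mostly very small:
result = pcurve_binomial([0.001, 0.004, 0.012, 0.020, 0.031, 0.048])
```

As the quoted statement notes, p-curve-style tests share the weakness that heterogeneous true effects degrade their accuracy.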
“…Lastly, there is considerable inherent uncertainty associated with the effect size estimates (Carter et al., 2019; Renkewitz & Keiner, 2018; van Assen et al., 2015), also leading to lower (sometimes very low) statistical power for detecting more subtle effect sizes.…”
Section: Discussion
confidence: 99%
“…It is indeed difficult to counter that grim feeling. If there is bias in the literature, it has been well demonstrated that unadjusted meta-analysis tends to detect an effect even if there is none, and the current adjustments are imperfect (Carter et al., 2019; Renkewitz & Keiner, 2018).…”
Section: Discussion
confidence: 99%
“…We will use the weightr package (Coburn & Vevea, 2019) in the R platform. Selection model approaches outperform other traditional methods (e.g., the trim-and-fill method; Carter et al., 2019). Notably, a recent simulation study of bias detection found that no publication bias method outperformed the others, and therefore it is best to choose a method based upon the data (Renkewitz & Keiner, 2018). Thus, we have outlined many methods to account for that possibility.…”
Section: Publication Bias
confidence: 99%
“…Thus, we have outlined many methods to account for that possibility. However, additional methods may be used based on heterogeneity or homogeneity of effect sizes once analyses begin (Renkewitz & Keiner, 2018).…”
Section: Publication Bias
confidence: 99%