2021
DOI: 10.1037/met0000300
Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes.

Abstract: Forthcoming in Psychological Methods. This paper is not the version of record and may not exactly replicate the final, published version of the article. The final article will be available, upon publication, via its DOI. Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such tech…

Cited by 269 publications (264 citation statements). References 75 publications.
“…Figure 4 presents a forest plot for this analysis. Egger's regression test (incorporating RVE per Rodgers and Pustejovsky, 2020) indicated no evidence of small-study bias ( = 0.34, p = 0.244; see Panel B in Figure 2 for a contour-enhanced funnel plot). Because influence diagnostics did not reveal any outliers, a leave-one-out analysis was not conducted.…”
Section: Correlations Between Self-report and Logged Measures (mentioning, confidence: 97%)
“…The previous methods assume independent effect sizes in their original formulation. One way to account for dependence is to combine all the effect sizes coming from the same sample into an average estimate for each study and then apply the classic methods to these aggregated estimates (Rodgers & Pustejovsky, 2020). In addition, some recent approaches directly handle the issue of dependence.…”
Section: Publication Bias (mentioning, confidence: 99%)
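As an illustration of the aggregate-then-test approach described in this excerpt, here is a minimal sketch, not the cited authors' implementation: dependent effect sizes are averaged within each study and a classic Egger-type weighted regression is then run on the aggregated estimates. The data values, the simple averaging rule, and the use of statsmodels are assumptions made for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical example data: one row per effect size, several per study.
study = np.array([1, 1, 2, 3, 3, 3, 4, 5])                           # study identifier
yi    = np.array([0.30, 0.25, 0.10, 0.45, 0.50, 0.40, 0.05, 0.20])   # effect size estimates
vi    = np.array([0.04, 0.05, 0.02, 0.06, 0.05, 0.06, 0.01, 0.03])   # sampling variances

# Step 1: aggregate dependent effect sizes into one estimate per study.
# The mean estimate and mean(vi)/k used here assume independent estimates
# within a study; with correlated estimates this understates the variance
# (using mean(vi) unchanged would be the conservative extreme).
uniq  = np.unique(study)
y_agg = np.array([yi[study == s].mean() for s in uniq])
v_agg = np.array([vi[study == s].mean() / np.sum(study == s) for s in uniq])

# Step 2: classic Egger-type regression on the aggregated estimates:
# inverse-variance-weighted regression of the effect size on its standard
# error; the test of the slope is the small-study-effects (asymmetry) test.
se_agg = np.sqrt(v_agg)
X = sm.add_constant(se_agg)
fit = sm.WLS(y_agg, X, weights=1.0 / v_agg).fit()
print(fit.params)                                  # [intercept, slope on standard error]
print("asymmetry test p-value:", fit.pvalues[1])
```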
“…In addition, some recent approaches directly handle the issue of dependence. For instance, the logic of PET-PEESE and other regression-based methods can be extended to multilevel models and RVE (Fernández-Castilla et al., 2019; Friese et al., 2017; Rodgers & Pustejovsky, 2020). Mathur and VanderWeele (2020) also proposed a sensitivity analysis that can be fitted with RVE.…”
Section: Publication Bias (mentioning, confidence: 99%)
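The RVE extension mentioned in this excerpt can be sketched, under assumptions, with a cluster-robust sandwich estimator: the Egger/PET regression is fit to all effect sizes at once and standard errors are clustered on study. The sketch below uses statsmodels' plain cluster-robust covariance; the RVE approach in the meta-analysis literature additionally applies small-sample corrections (e.g., CR2 with Satterthwaite degrees of freedom), which are omitted here. Data values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical example data: several dependent effect sizes per study.
study = np.array([1, 1, 2, 3, 3, 3, 4, 5])
yi    = np.array([0.30, 0.25, 0.10, 0.45, 0.50, 0.40, 0.05, 0.20])
vi    = np.array([0.04, 0.05, 0.02, 0.06, 0.05, 0.06, 0.01, 0.03])
sei   = np.sqrt(vi)

# PET / Egger-type regression fit to all effect sizes at once, with
# standard errors clustered on study to account for dependence among
# estimates from the same sample. This plain cluster-robust estimator
# lacks the small-sample corrections used by RVE proper, which matter
# when the number of studies is small, as in this toy example.
X = sm.add_constant(sei)
fit = sm.WLS(yi, X, weights=1.0 / vi).fit(
    cov_type="cluster", cov_kwds={"groups": study}
)
print(fit.params)   # intercept = PET-adjusted effect; slope = asymmetry term
print("asymmetry test p-value:", fit.pvalues[1])
```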
“…Egger's regression tests for dependent effect sizes and funnel plots were used to detect publication bias. While no bias was detected, it should be noted that such tests have limited power, especially when the number of articles is small, as in the case of our meta-analysis [34].…”
Section: Publication Bias and Sensitivity Analyses (mentioning, confidence: 73%)
“…We contacted the authors of unpublished studies (with up to three email attempts) and discovered that most of them had indeed been published. We tested for publication bias using a modification of Egger's regression test for dependent effect sizes [34].…”
Section: Risk Of Bias (mentioning, confidence: 99%)