2022
DOI: 10.15626/mp.2021.2720
Z-curve 2.0: Estimating Replication Rates and Discovery Rates

Abstract: Selection for statistical significance is a well-known factor that distorts the published literature and challenges the cumulative progress in science. Recent replication failures have fueled concerns that many published results are false-positives. Brunner and Schimmack (2020) developed z-curve, a method for estimating the expected replication rate (ERR) – the predicted success rate of exact replication studies based on the mean power after selection for significance. This article introduces an extension of t…

Cited by 34 publications (54 citation statements) · References 47 publications
“…However, reporting only effect sizes and their CIs, and full information about the pre-study power calculations, might not be enough. With the aim of facilitating cumulative scientific knowledge through meta-analysis [75, 76], and the use of other statistical methods such as z-curve/p-curve [44, 84] or BUCSS to conduct power calculations adjusting for publication bias and uncertainty around parameter estimates [62], it has been suggested that besides sample size per condition, means, SDs and exact p-values, studies should also disclose F-ratio or t-statistics, the type of design, and the correlations between dependent observations for within-subjects designs [76], but it appears that this is rarely achieved. The compounding issues of poor reporting practices are easy to demonstrate with two examples; firstly, consider a within-subject design (i.e.…”
Section: Methodological Issues
confidence: 99%
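The quoted statement notes that the correlation between dependent observations must be reported for within-subjects designs. A minimal sketch of why this matters (illustrative only, not from the cited paper; function and parameter names are my own): with the same raw effect, the same SDs, and the same sample size, the power of a paired t-test depends directly on that correlation, because it determines the SD of the difference scores.

```python
# Sketch (assumption: equal SDs across conditions, two-sided alpha = .05):
# how the correlation r between repeated measures changes paired t-test power.
import numpy as np
from scipy.stats import nct, t as t_dist

def paired_ttest_power(mean_diff, sd, r, n, alpha=0.05):
    """Two-sided paired t-test power for n pairs, correlation r."""
    sd_diff = np.sqrt(2 * sd**2 * (1 - r))   # SD of difference scores
    dz = mean_diff / sd_diff                 # standardized effect (Cohen's d_z)
    df = n - 1
    ncp = dz * np.sqrt(n)                    # noncentrality parameter
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    # Power = P(|T| > t_crit) under the noncentral t distribution
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# Identical effect (0.5 SD) and n = 30 pairs; only the correlation differs.
low_r = paired_ttest_power(mean_diff=0.5, sd=1.0, r=0.2, n=30)
high_r = paired_ttest_power(mean_diff=0.5, sd=1.0, r=0.8, n=30)
```

Without the correlation, a reader of the published means and SDs cannot reconstruct `sd_diff`, and therefore cannot recompute power or the standardized effect — which is the reporting gap the statement describes.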
“…All data analysis was performed in R (v4.1.0) with packages: dplyr, ggplot2. Z-curves were generated following the method described by Brunner, Schimmack and Bartoš [20, 21], employing the R package zcurve. Literature search, data extraction, analysis and reporting were performed in accordance with the PRISMA and COSMOS-E statements [22, 23].…”
Section: Methods
confidence: 99%
“…Based on these assumptions, z-curve takes a set of z-transformed p-values and models them as a mixture of truncated folded normal distributions (for details, see Bartoš & Schimmack, 2022) to calculate several indexes of interest.…”
Section: Analytic Strategy: Z-curve Analysis
confidence: 99%
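The z-transformation and truncation this statement refers to can be made concrete. A minimal sketch (illustrative only — this is not the zcurve package, and the p-values are made up): two-sided p-values are converted to absolute z-scores, and only those exceeding the significance threshold enter the model, which is why the fitted distributions are truncated and folded.

```python
# Sketch (assumption: two-sided p-values): build the input z-curve models.
from scipy.stats import norm

def p_to_abs_z(p_values):
    """Two-sided p-value -> |z| via the normal inverse survival function."""
    return [norm.isf(p / 2) for p in p_values]

p_values = [0.001, 0.02, 0.04, 0.06, 0.30]   # hypothetical published p-values
z_abs = p_to_abs_z(p_values)

# Selection for significance: z-curve fits only z > 1.96 (p < .05 two-sided),
# so the modeled distribution is truncated at the significance criterion.
z_crit = norm.isf(0.05 / 2)
significant = [z for z in z_abs if z > z_crit]
```

Folding arises because the sign of the test statistic is discarded (only |z| is modeled); truncation arises because non-significant results never enter the set.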
“…Finally, it is also worth mentioning that TES tests for publication bias but cannot quantify the amount of bias (Bartoš & Schimmack, 2022).…”
confidence: 99%