“…This overemphasis is substantiated by the finding that more than 90% of results in the psychological literature are statistically significant (Open Science Collaboration, 2015; Sterling, 1959; Sterling, Rosenbaum, & Weinkam, 1995) despite low statistical power due to small sample sizes (Cohen, 1962; Sedlmeier & Gigerenzer, 1989; Marszalek, Barber, Kohlhart, & Holmes, 2011; Bakker, van Dijk, & Wicherts, 2012). Consequently, publications have become biased by overrepresenting statistically significant results (Greenwald, 1975), which generally results in effect size overestimation in both individual studies (Nuijten, Hartgerink, van Assen, Epskamp, & Wicherts, 2015) and meta-analyses (Lane & Dunlap, 1978; Rothstein, Sutton, & Borenstein, 2005; Borenstein, Hedges, Higgins, & Rothstein, 2009; van Assen, van Aert, & Wicherts, 2015). The overemphasis on statistically significant effects has been accompanied by questionable research practices (QRPs; John, Loewenstein, & Prelec, 2012) such as erroneously rounding p-values towards significance, which, for example, occurred for 13.8% of all p-values reported as "p = .05" in articles from eight major psychology journals in the period 1985–2013 (Hartgerink, van Aert, Nuijten, Wicherts, & van Assen, 2016).…”
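The mechanism behind the effect size overestimation described above can be illustrated with a minimal simulation sketch. Under assumed, illustrative parameters (a two-group comparison with a small true effect of d = 0.3, n = 20 per group, and a normal approximation to the sampling distribution of d), selecting only statistically significant studies inflates the average observed effect, because with low power only unusually large sample effects cross the significance threshold:

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.3   # assumed true standardized mean difference (Cohen's d)
N = 20              # assumed per-group sample size (small, hence low power)
CRIT = 1.96         # two-sided z criterion at alpha = .05 (normal approximation)

def simulate_study():
    """Draw one study's observed d and test it for significance."""
    se = math.sqrt(2 / N)                  # approximate standard error of d
    d = random.gauss(TRUE_EFFECT, se)      # observed effect in this study
    significant = abs(d / se) > CRIT
    return d, significant

all_d, sig_d = [], []
for _ in range(100_000):
    d, sig = simulate_study()
    all_d.append(d)
    if sig:
        sig_d.append(d)

print(f"mean d, all studies:       {statistics.mean(all_d):.3f}")
print(f"mean d, significant only:  {statistics.mean(sig_d):.3f}")
print(f"power (share significant): {len(sig_d) / len(all_d):.2f}")
```

In this sketch the unconditional mean recovers the true effect, while the mean across significant studies alone is substantially larger, mirroring the bias that publication selection introduces into individual studies and meta-analyses.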