“…Because statistical power is a function of multiple factors, the problem may be less severe in domains (such as psychophysics) that commonly feature low intrasubject variability, within-subjects designs, and multiple measurement trials per subject (Rouder & Haaf, 2017). Inadequate statistical power, coupled with publication bias, can lead to inflated effect-size estimates and increases the likelihood of false negatives and false discoveries (Button et al., 2013; Fraley & Vazire, 2014; Ioannidis, 2005). Survey evidence and examination of articles' Method sections suggest that many psychologists choose sample sizes on the basis of typical practice in their domains of research rather than formal power analysis (Sedlmeier & Gigerenzer, 1989; Vankov et al., 2014).…”

[Table residue (common statistical errors and representative citations): Incorrect calculation of effect sizes (e.g., using erroneous formulas): Hardwicke et al. (2018). Incorrectly concluding that a nonsignificant outcome means that there is "no effect": Dienes (2014); Finch, Cumming, and Thomason (2001); Sedlmeier and Gigerenzer (1989). Assuming that the difference between significant and not significant is itself significant, or analyzing interactions erroneously: Nieuwenhuis, Forstmann, and Wagenmakers (2011); Gelman and Stern (2006). Citations without recoverable row labels: Bakker and Wicherts (2014); John et al. (2012); Kerr (1998); Wagenmakers, Wetzels, Borsboom, van der Maas, and Kievit (2012).]
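The contrast between sample sizes chosen by habit and those chosen by formal power analysis can be sketched numerically. The following is a minimal, illustrative calculation (my own sketch, not from the source) using the standard normal approximation to a two-sided, two-sample test; the function names and the example values (d = 0.5, n = 20) are assumptions for illustration only.

```python
from math import ceil, sqrt
from statistics import NormalDist  # standard library, Python >= 3.8

_Z = NormalDist()  # standard normal distribution


def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a standardized
    mean difference d in a two-sided, two-sample comparison
    (normal approximation; the exact t-test answer is slightly larger)."""
    z_alpha = _Z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = _Z.inv_cdf(power)           # quantile corresponding to target power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)


def achieved_power(d, n, alpha=0.05):
    """Approximate power of the same test with n subjects per group."""
    z_alpha = _Z.inv_cdf(1 - alpha / 2)
    return 1 - _Z.cdf(z_alpha - d * sqrt(n / 2))


# For a medium effect (d = 0.5), 80% power needs ~63 subjects per group,
# whereas a "typical practice" n of 20 per group yields only ~35% power.
print(n_per_group(0.5))            # → 63
print(achieved_power(0.5, 20))     # ≈ 0.35
```

Under this approximation, an underpowered design does not merely risk missing real effects: the effects that do reach significance must be overestimated, which is the inflation mechanism the passage describes.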