In the past two decades, psychological science has experienced an unprecedented replicability crisis, which has uncovered several problematic issues. Among others, the use and misuse of statistical inference plays a key role in this crisis. Indeed, statistical inference is too often viewed as an isolated procedure limited to the analysis of data that have already been collected. Instead, statistical reasoning is necessary both at the planning stage and when interpreting the results of a research project. Based on these considerations, we build on and further develop an idea proposed by Gelman and Carlin (2014) termed "prospective and retrospective design analysis". Rather than focusing only on the statistical significance of a result and on the classical control of Type I and Type II errors, a comprehensive design analysis involves reasoning about what can be considered a plausible effect size. Furthermore, it introduces two relevant inferential risks: the exaggeration ratio, or Type M error (i.e., the predictable average overestimation of an effect that emerges as statistically significant), and the sign error, or Type S error (i.e., the risk that a statistically significant effect is estimated in the wrong direction). An important aspect of design analysis is that it can be usefully carried out both in the planning phase of a study and in the evaluation of studies that have already been conducted, thus increasing researchers' awareness during all phases of a research project. To illustrate the benefits of design analysis to the widest possible audience, we use a familiar example in psychology in which the researcher is interested in the differences between two independent groups, with Cohen's d as the effect size measure. We examine the case in which the plausible effect size is formalized as a single value, and we propose a method in which uncertainty concerning the magnitude of the effect is formalized via probability distributions. Through several examples and an application to a real case study, we show that, even though a design analysis requires considerable effort, it has the potential to contribute to planning more robust and replicable studies. Finally, future developments in the Bayesian framework are discussed.
Keywords: prospective and retrospective design analysis, Type M and Type S errors, effect size, power, psychological research, statistical inference, statistical reasoning, R functions

"If statisticians agree on one thing, it is that scientific inference should not be made mechanically."
Gigerenzer and Marewski (2015, p. 422)

"Accept uncertainty. Be thoughtful, open, and modest."
Wasserstein et al. (2019, p. 2)
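As a concrete illustration of the quantities introduced in the abstract, the following R sketch estimates by simulation the power, the Type M error (exaggeration ratio), and the Type S error (sign error) for a comparison between two independent groups, when the plausible effect size is formalized as a single value on Cohen's d scale. It is a minimal sketch, not the authors' released functions: the function name design_analysis_sim, the default values of d and n, and the equal-variance t-test are illustrative assumptions.

```r
# Minimal simulation sketch of a design analysis for two independent groups.
# Assumptions (illustrative, not from the paper): within-group SD = 1, so the
# mean difference d coincides with Cohen's d; two-sided t-test at level alpha.
design_analysis_sim <- function(d = 0.30, n = 20, nsim = 1e4, alpha = 0.05) {
  res <- replicate(nsim, {
    g1 <- rnorm(n, mean = 0, sd = 1)   # first group
    g2 <- rnorm(n, mean = d, sd = 1)   # second group, true effect = d
    c(est = mean(g2) - mean(g1),
      p   = t.test(g2, g1, var.equal = TRUE)$p.value)
  })
  sig <- res["p", ] < alpha                     # replications reaching significance
  c(power = mean(sig),                          # probability of a significant result
    typeM = mean(abs(res["est", sig])) / d,     # exaggeration ratio among significant results
    typeS = mean(res["est", sig] * d < 0))      # proportion of significant results with wrong sign
}

# Example: a plausible effect of d = 0.30 with 20 participants per group
design_analysis_sim(d = 0.30, n = 20)
```

With a small plausible effect and small groups, such a simulation typically yields low power together with a large exaggeration ratio and a non-negligible sign error, which is exactly the pattern a prospective (before data collection) or retrospective (after data collection) design analysis is meant to expose. Uncertainty about the effect could be formalized by drawing d from a probability distribution at each replication rather than fixing it at a single value.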