Götz et al. (2021) argue that small effects are the indispensable foundation for a cumulative psychological science. Whilst we applaud their efforts to bring this important discussion to the forefront, we argue that their core arguments do not hold up under scrutiny and, if left uncorrected, have the potential to undermine best practices in reporting and interpreting effect size estimates. Their article can be used as a convenient blanket defence to justify ‘small’ effects as meaningful. In our reply, we first argue that comparisons between psychological science and genetics are fundamentally flawed because these disciplines have vastly different goals and methodologies. Second, we argue that p-values, not effect sizes, are the main currency for publication in psychology, meaning that any biases in the literature are caused by this pressure to publish statistically significant results, not by a pressure to publish large effects. Third, we contend that claims regarding small effects as important and consequential must be supported by empirical evidence, or at least rest on a falsifiable line of reasoning. Finally, we propose that researchers evaluate effect sizes in relative, not absolute, terms, and we offer several approaches for doing so.