Controversy is not new in Statistics. Since the start of the 20th century, proponents of three theories have claimed superiority. Bayesian theorists mathematically combine subjective prior probabilities with the probability of the data. R.A. Fisher re-envisioned Bayes’ theory by eliminating subjective probability and inventing a data-generating probability model called the null hypothesis. With this approach, only the probability of the data can be computed. Subsequently, Neyman and Pearson supplemented Fisher’s null model with alternative data-generating probability models. In this century, massive “omics” data are analyzed with a complex amalgam of computer science, advanced mathematics, statistics, and domain-specific knowledge. This paper does not attempt to predict the future of statistics, unify the three classical statistical theories, argue the superiority of one over the others, propose a new theory, or call for a radical shift to a new paradigm (e.g., qualitative or mixed methods research). The statistical analyses in this paper are grounded in Fisher’s paradigm. Independent samples t-tests were run with simulated data under a true and a false null hypothesis. Statistical significance was evaluated with p-values, and substantive significance was determined using Cohen’s “effect size index d.” It is shown with graphs and a few numbers that statistical significance is a viable tool for filtering out effect size errors that would otherwise be misinterpreted as substantively significant. Finally, it is shown that increasing sample size does not improve power under a true null hypothesis; power increases with sample size only under a false null hypothesis.
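The following is a minimal sketch, not the authors’ actual code, of the kind of simulation the abstract describes: repeated independent samples t-tests on simulated normal data under a true null hypothesis (no mean difference) and a false one (a real mean difference), recording the p-value and Cohen’s d for each replicate. The function name `simulate` and the parameters `n_per_group`, `true_diff`, and `n_reps` are hypothetical illustrations, as are the chosen sample size, effect size, and number of replications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)  # arbitrary seed for reproducibility

def simulate(n_per_group, true_diff, n_reps=10_000):
    """Run n_reps independent-samples t-tests; return p-values and Cohen's d."""
    p_values = np.empty(n_reps)
    cohens_d = np.empty(n_reps)
    for i in range(n_reps):
        x = rng.normal(0.0, 1.0, n_per_group)        # group 1
        y = rng.normal(true_diff, 1.0, n_per_group)  # group 2; true_diff = 0 means H0 is true
        _, p = stats.ttest_ind(x, y)                 # independent-samples t-test
        pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
        p_values[i] = p
        cohens_d[i] = (y.mean() - x.mean()) / pooled_sd  # Cohen's effect size index d
    return p_values, cohens_d

# True null hypothesis: every nonzero sample d is an effect size error;
# the p < .05 filter screens most of them out (about 5% slip through).
p0, d0 = simulate(n_per_group=30, true_diff=0.0)
print("H0 true : rejection rate =", (p0 < 0.05).mean())

# False null hypothesis: rejections now reflect a real effect, so the
# rejection rate estimates power and grows with sample size.
p1, d1 = simulate(n_per_group=30, true_diff=0.5)
print("H0 false: rejection rate (power) =", (p1 < 0.05).mean())
```

Comparing the distribution of Cohen’s d among rejected versus retained tests in the two conditions is one way to visualize the filtering role of statistical significance described above.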