Haven’t all of us dreamt of concluding that our results are statistically significant, that is, characterized by a p-value lying below an arbitrary threshold, most often 0.05? In this article, we first deplore that the p-value has been largely misunderstood, and that its misinterpretation has led to a fallacious dichotomization of results and an understatement of the uncertainty surrounding the tested effect. Next, we introduce and explain a new, direct measure of the plausibility of the effect under study. Then, we illustrate the relevance of this indicator by revisiting a recently published marketing research case. We also stress the need to contextualize this indicator with complementary credibility intervals, contrasted graphically. Beyond making researchers aware of the exact meaning of test-related probabilities, the proposed approach invites them to formulate their inferences with prudence and modesty, acknowledging how uncertain they are.
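
The abstract does not spell out how the plausibility indicator is constructed, so the following is only a minimal sketch of the general idea it alludes to: under a normal approximation with a flat prior, the p-value (an indirect statement about the data given a zero effect) is contrasted with a direct posterior probability that the effect is positive, accompanied by a credibility interval. The values effect_estimate and standard_error are hypothetical placeholders, and the flat-prior normal model is an assumption, not the authors' method.

```python
# Illustrative sketch only, not the authors' exact indicator.
# Assumes a normal approximation for the estimate and a flat prior.
from scipy.stats import norm

effect_estimate = 0.12   # hypothetical observed effect (e.g., a regression coefficient)
standard_error = 0.07    # hypothetical standard error of that estimate

z = effect_estimate / standard_error

# Classical two-sided p-value: probability of data at least this extreme
# if the true effect were exactly zero.
p_value = 2 * norm.sf(abs(z))

# A "direct" plausibility statement under the flat-prior assumption:
# posterior probability that the effect is positive, Pr(effect > 0 | data).
plausibility_positive = norm.cdf(z)

# 95% credibility interval for the effect under the same assumptions.
lower, upper = norm.interval(0.95, loc=effect_estimate, scale=standard_error)

print(f"p-value (two-sided)     : {p_value:.3f}")
print(f"Pr(effect > 0 | data)   : {plausibility_positive:.3f}")
print(f"95% credibility interval: [{lower:.3f}, {upper:.3f}]")
```

Read this way, the contrast is between a statement about hypothetical data under a point-null hypothesis (the p-value) and a statement about the effect itself, whose uncertainty the interval makes visible rather than collapsing into a significant/non-significant dichotomy.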