In this paper, we consider a stochastic Model Predictive Control (MPC) scheme able to account for the effects of additive stochastic disturbances with unbounded support, requiring no restrictive assumptions of independence or Gaussianity. We revisit the rather classical approach based on penalty functions, with the aim of designing a control scheme that meets given probabilistic specifications. The main difference from previous approaches is that we do not resort to the notion of probabilistic recursive feasibility, and hence we do not treat the infeasible case separately. In particular, two probabilistic design problems are addressed. The first randomization problem aims at designing the constraint-set tightening offline, following an approach inherited from tube-based MPC. For the second probabilistic scheme, a specific probabilistic validation approach is exploited for tuning the penalty parameter, to be selected offline from a finite family of candidate values. The simple algorithm proposed here allows the design of a single controller that always guarantees feasibility of the online optimization problem. The proposed method is shown to be more computationally tractable than previous schemes, since the sample complexity of both probabilistic design problems depends only logarithmically on the prediction horizon, whereas scenario-based approaches exhibit a linear dependence. The efficacy of the proposed approach is demonstrated with a numerical example.
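As a rough illustration of the offline validation step mentioned above, the following Python sketch selects a penalty parameter from a finite candidate family by Monte Carlo validation: each candidate is tested against sampled disturbance sequences and accepted if its empirical violation rate stays below a chosen level. This is a hedged sketch of the general idea, not the paper's algorithm; all names (`simulate_closed_loop`, the candidate values, the sample size, the violation level) are illustrative assumptions.

```python
# Minimal sketch of offline probabilistic validation of a penalty
# parameter chosen from a finite family. All dynamics, parameter
# values, and sample sizes below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_closed_loop(penalty, disturbance_seq):
    """Hypothetical closed-loop rollout under a penalty-based MPC law.
    Returns True if the state constraint is violated anywhere along
    the horizon. Stub: replace with the actual controller."""
    x = 0.0
    for w in disturbance_seq:
        # Placeholder dynamics: larger penalty -> stronger pull to the set.
        x = 0.9 * x + w - min(penalty * 1e-3, 0.5) * np.sign(x)
        if abs(x) > 5.0:
            return True
    return False

candidates = [1.0, 10.0, 100.0, 1000.0]   # finite family of penalty values
horizon, n_samples, eps = 20, 2000, 0.05  # eps: allowed violation level

accepted = []
for penalty in candidates:
    # Draw disturbance sequences with unbounded, non-Gaussian support
    # (Laplace here, purely to stress the "no Gaussianity" point).
    violations = sum(
        simulate_closed_loop(penalty, rng.laplace(0.0, 1.0, size=horizon))
        for _ in range(n_samples)
    )
    if violations / n_samples <= eps:
        accepted.append(penalty)

print("validated penalty parameters:", accepted)
```

Because the family of candidates is finite and fixed offline, the number of validation samples needed grows only with the logarithm of the family size, which is consistent with the logarithmic dependence on the prediction horizon claimed above.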