Stochastic optimization methods, such as genetic algorithms, search for the global minimum of the misfit function within a given parameter range and do not require any calculation of the gradients of the misfit surfaces. More importantly, these methods collect a series of models and associated likelihoods that can be used to estimate the posterior probability distribution. However, because genetic algorithms are not a Markov chain Monte Carlo method, the direct use of the genetic-algorithm-sampled models and their associated likelihoods produces a biased estimate of the posterior probability distribution. In contrast, Markov chain Monte Carlo methods, such as the Metropolis-Hastings algorithm and the Gibbs sampler, provide accurate posterior probability distributions but at considerable computational cost. In this paper, we use a hybrid method that combines the speed of a genetic algorithm in finding an optimal solution with the accuracy of a Gibbs sampler in obtaining a reliable estimate of the posterior probability distributions. First, we test this method on an analytical function and show that the genetic algorithm alone cannot recover the true probability distributions and tends to underestimate the true uncertainties. Conversely, combining the genetic algorithm optimization with a Gibbs sampler step enables us to recover the true posterior probability distributions. Then, we demonstrate the applicability of this hybrid method by performing one-dimensional elastic full-waveform inversions on synthetic and field data. We also discuss how an appropriate genetic algorithm implementation is essential to attenuate the "genetic drift" effect and to maximize the exploration of the model space. In fact, a wide and efficient exploration of the model space is important not only to avoid entrapment in local minima during the genetic algorithm optimization but also to ensure a reliable estimation of the posterior probability distributions in the subsequent Gibbs sampler step.
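The two-stage idea summarized above can be sketched in a few lines of code. The following Python snippet is only an illustrative toy, not the authors' implementation: the two-parameter quadratic misfit, the population size, the truncation selection, and the grid-based sampling of the one-dimensional conditionals are assumptions chosen for brevity. It shows the workflow order (genetic algorithm optimization first, Gibbs sampling of the posterior second) on an analytical misfit function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analytical misfit with a single basin; the posterior is taken as exp(-misfit).
# (Illustrative assumption, not the misfit used in the paper.)
def misfit(m):
    return 0.5 * ((m[0] - 1.0) ** 2 / 0.2 ** 2 + (m[1] + 2.0) ** 2 / 0.5 ** 2)

bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])  # search range per parameter

# --- Stage 1: simple genetic algorithm to locate a near-optimal model ---------
def genetic_algorithm(n_pop=50, n_gen=100, mut_sigma=0.3):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_pop, 2))
    for _ in range(n_gen):
        fit = np.array([misfit(m) for m in pop])
        parents = pop[np.argsort(fit)[: n_pop // 2]]          # truncation selection
        idx = rng.integers(0, len(parents), size=(n_pop, 2))   # random parent pairs
        alpha = rng.random((n_pop, 1))
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        children += rng.normal(0.0, mut_sigma, children.shape)  # Gaussian mutation
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    fit = np.array([misfit(m) for m in pop])
    return pop[np.argmin(fit)]

# --- Stage 2: grid-based Gibbs sampler started from the GA solution -----------
def gibbs_sampler(m_start, n_iter=2000, n_grid=200):
    m = m_start.copy()
    samples = np.empty((n_iter, 2))
    for it in range(n_iter):
        for k in range(2):                                    # one parameter at a time
            grid = np.linspace(bounds[k, 0], bounds[k, 1], n_grid)
            trial = np.tile(m, (n_grid, 1))
            trial[:, k] = grid
            logp = -np.array([misfit(t) for t in trial])
            p = np.exp(logp - logp.max())
            p /= p.sum()
            m[k] = rng.choice(grid, p=p)                      # draw from the 1-D conditional
        samples[it] = m
    return samples

m_best = genetic_algorithm()
post = gibbs_sampler(m_best)
print("GA optimum:       ", m_best)
print("posterior mean:   ", post.mean(axis=0))
print("posterior stddev: ", post.std(axis=0))
```

In this sketch the posterior statistics are computed only from the Gibbs-sampled models; the genetic algorithm population is used solely to provide a good starting point, which reflects the point made above that GA-sampled models alone yield biased uncertainty estimates.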