The automatic induction of machine learning models capable of addressing supervised learning, feature selection, clustering and reinforcement learning problems requires sophisticated intelligent search procedures. These searches are usually performed in the space of possible model structures, leading to combinatorial optimization problems, and in the parameter spaces, where continuous optimization problems must be solved. This paper reviews how estimation of distribution algorithms (EDAs), a class of evolutionary algorithms, can be used to address these problems. Topics include preprocessing, mining association rules, selecting variables, searching for the optimal supervised learning model (both probabilistic and nonprobabilistic), finding the best hierarchical, partitional or probabilistic clustering, obtaining the optimal policy in reinforcement learning, and performing inference and structural learning in Bayesian networks for association discovery. Guidelines for future work in this area are also provided.
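To make the idea of an EDA concrete, the following is a minimal sketch (not taken from the paper) of the univariate marginal distribution algorithm (UMDA), one of the simplest EDAs, applied to a binary search space such as a feature-subset encoding. The fitness function `one_max` is only a placeholder for a real model-quality measure (e.g., the cross-validated accuracy of a classifier built on the selected features), and all names and parameter values are illustrative assumptions.

```python
# Sketch of UMDA, the simplest estimation of distribution algorithm, on a
# binary encoding (e.g., a feature-selection mask). Illustrative only.
import numpy as np

def one_max(bits: np.ndarray) -> float:
    """Toy fitness: number of selected bits (placeholder for model quality)."""
    return float(bits.sum())

def umda(n_vars: int, pop_size: int = 100, n_select: int = 50,
         n_generations: int = 50, fitness=one_max, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Start with uniform marginals: each variable selected with probability 0.5.
    p = np.full(n_vars, 0.5)
    best, best_fit = None, -np.inf
    for _ in range(n_generations):
        # 1. Sample a population from the current factorized distribution.
        pop = (rng.random((pop_size, n_vars)) < p).astype(int)
        # 2. Evaluate every individual.
        fits = np.array([fitness(ind) for ind in pop])
        i = int(fits.argmax())
        if fits[i] > best_fit:
            best, best_fit = pop[i].copy(), fits[i]
        # 3. Truncation selection: keep the best individuals.
        selected = pop[np.argsort(fits)[-n_select:]]
        # 4. Re-estimate the univariate marginals from the selected set,
        #    clipping so no probability collapses to 0 or 1.
        p = np.clip(selected.mean(axis=0), 0.05, 0.95)
    return best

if __name__ == "__main__":
    print("Best feature mask found:", umda(n_vars=20))
```

More sophisticated EDAs replace these independent marginals with a richer probabilistic model over the selected individuals (for instance a Bayesian network for discrete spaces or a Gaussian model for continuous parameters), which is what makes them applicable to the structure-search and parameter-search problems surveyed in the paper.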