Learning from crowds is a classification problem where the training instances are labeled by multiple (usually conflicting) annotators. In different scenarios of this problem, straightforward label-combination strategies show surprisingly good performance. In this paper, we characterize the crowd scenarios in which these basic strategies behave well. Consequently, this study allows us to identify the scenarios where more sophisticated methods for combining the multiple labels can be expected to obtain better results. In this context, we extend the learning-from-crowds paradigm to the multidimensional (MD) classification domain. By measuring the quality of the annotators, the presented EM-based method overcomes the lack of a fully reliable labeling for learning MD Bayesian network classifiers: as expertise is identified and the contribution of the reliable annotators is promoted, the model parameters are optimized. The good performance of our proposal is demonstrated through several sets of experiments.
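To make the EM idea concrete, the sketch below shows a classic Dawid-Skene-style EM for a single class variable: the E-step infers a posterior over each item's true label, and the M-step re-estimates per-annotator confusion matrices, so reliable annotators end up carrying more weight. This is an illustrative assumption, not the paper's MD Bayesian network method; the function name `dawid_skene` and the complete label matrix it expects are hypothetical.

```python
# Illustrative sketch only (Dawid-Skene-style EM); the paper's actual method
# learns MD Bayesian network classifiers and is more elaborate.
import numpy as np

def dawid_skene(labels, n_classes, n_iters=50):
    """labels: (n_items, n_annotators) int array; assumes every annotator
    labels every item (real crowd data is usually sparser)."""
    n_items, n_annot = labels.shape

    # Initialize the posterior over true labels with per-item vote shares.
    post = np.array([np.bincount(row, minlength=n_classes) for row in labels],
                    dtype=float)
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # M-step: class prior and one confusion matrix per annotator,
        # with Laplace smoothing so no probability collapses to zero.
        prior = post.mean(axis=0)
        conf = np.ones((n_annot, n_classes, n_classes))
        for a in range(n_annot):
            for c in range(n_classes):
                # Soft counts of true class k when annotator a answered c.
                conf[a, :, c] += post[labels[:, a] == c].sum(axis=0)
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: posterior over each item's true label given all annotations.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for a in range(n_annot):
            log_post += np.log(conf[a][:, labels[:, a]]).T
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)

    return post, conf, prior
```

Weighting annotations by each annotator's estimated confusion matrix is what lets the reliable annotators dominate the inferred labels, the effect the abstract describes as identifying expertise and promoting the relevant contributions.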