In this paper, a dynamically regularized harmony learning (DRHL) algorithm is proposed for Gaussian mixture learning, with the favorable property of both adaptive model selection and consistent parameter estimation. Specifically, under the framework of Bayesian Ying-Yang (BYY) harmony learning, we add to the harmony function on Gaussian mixtures a regularization term, the average Shannon entropy of the posterior probability per sample, controlled by a scale factor that increases dynamically from 0 to 1. Experiments on both synthetic and real-world datasets demonstrate that the DRHL algorithm can not only select the correct number of actual Gaussians in the dataset, but also obtain the maximum likelihood (ML) estimates of the parameters of the actual mixture. Moreover, the DRHL algorithm is scalable and can be applied to big datasets.
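The regularized objective sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the function and variable names are illustrative, the scale factor `lam` is passed in directly (the paper's dynamic schedule from 0 to 1 is omitted), and the harmony function is taken in the common BYY form as the average posterior-weighted log joint density. Note that at `lam = 1` the objective reduces algebraically to the average log-likelihood of the mixture, which is consistent with the claim that the algorithm recovers ML estimates.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal


def regularized_harmony(X, weights, means, covs, lam):
    """Harmony function plus lam times the average posterior Shannon entropy.

    At lam = 0 this is the plain harmony function; at lam = 1 it equals the
    average log-likelihood of the Gaussian mixture.
    """
    # log alpha_k + log N(x_n | mu_k, Sigma_k), shape (N, K)
    log_joint = np.stack([
        np.log(w) + multivariate_normal.logpdf(X, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ], axis=1)
    # log posterior p(k | x_n), normalized over components
    log_post = log_joint - logsumexp(log_joint, axis=1, keepdims=True)
    post = np.exp(log_post)
    harmony = np.mean(np.sum(post * log_joint, axis=1))   # harmony function
    entropy = -np.mean(np.sum(post * log_post, axis=1))   # avg posterior entropy
    return harmony + lam * entropy


# Usage: a well-separated two-component mixture in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(5.0, 1.0, (50, 2))])
weights = np.array([0.5, 0.5])
means = [np.zeros(2), np.full(2, 5.0)]
covs = [np.eye(2), np.eye(2)]
J0 = regularized_harmony(X, weights, means, covs, 0.0)
J1 = regularized_harmony(X, weights, means, covs, 1.0)
```

Since the posterior entropy is nonnegative, `J1 >= J0` always holds, and `J1` coincides with the sample-average log-likelihood; sweeping `lam` from 0 to 1 therefore interpolates between the model-selecting harmony objective and the ML objective.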