Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models suffer from a potential risk of leaking private information contained in training data, which has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level privacy in training data while preserving the quality of training data for model building. This work provides a comprehensive survey of existing works that incorporate differential privacy with machine learning, so-called differentially private machine learning, and categorizes them into two broad categories according to the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanisms and the output/objective perturbation mechanisms. In the former, a calibrated amount of noise is added to the non-private model; in the latter, the output or the objective function is perturbed by random noise. In particular, the survey covers the techniques of differentially private deep learning to alleviate recent concerns about the privacy of big-data contributors. In addition, the research challenges in terms of model utility, privacy level, and applications are discussed. To tackle these challenges, several potential future research directions for differentially private machine learning are pointed out.
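To make the notion of "calibrated noise" concrete, the classic Laplace mechanism releases a query answer perturbed by noise drawn from a Laplace distribution with scale Δf/ε, where Δf is the query's sensitivity (the maximum change in the answer from altering one individual's record) and ε is the privacy budget. A minimal sketch, assuming a scalar numeric query (the function name and the example count query are illustrative, not taken from the survey):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    satisfying epsilon-differential privacy for a query with the given
    (L1) sensitivity."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    # Smaller epsilon (stronger privacy) => larger noise scale.
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (sensitivity 1) under epsilon = 0.5.
private_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```

Note the trade-off the survey discusses: as ε shrinks, the noise scale Δf/ε grows, so stronger privacy directly costs utility.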