Deep learning techniques based on neural networks have made significant achievements in various fields of artificial intelligence. However, model training requires large‐scale data sets, which are often crowd‐sourced, and the trained model parameters encode private information, creating a risk of privacy leakage. With the trend toward sharing pretrained models, the risk of recovering training data through membership inference attacks and model inversion attacks is further heightened. To tackle the privacy‐preserving problems in deep learning tasks, we propose an improved Differentially Private Stochastic Gradient Descent (DP‐SGD) algorithm that uses a Simulated Annealing algorithm and a Laplace Smoothing denoising mechanism to optimize the allocation of the privacy loss, and replaces constant gradient clipping with adaptive gradient clipping to improve model accuracy. We also analyze in detail the privacy cost under the random‐shuffle data batching method within the framework of Subsampled Rényi Differential Privacy. Compared with existing privacy‐preserving training methods with fixed and dynamic privacy parameters on classification tasks, our implementation and experiments show that we can train deep neural networks with nonconvex objective functions under a smaller privacy budget, obtain higher model evaluation scores, and incur almost zero additional cost in terms of model complexity, training efficiency, and model quality.
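To make the adaptive‐clipping idea concrete, the following is a minimal sketch of one DP‐SGD step in NumPy. The adaptive rule shown here, setting the clipping threshold to a quantile of the per‐sample gradient norms, is an illustrative assumption for exposition; the function name `dp_sgd_step` and the parameter `clip_quantile` are hypothetical and do not come from the paper itself.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, noise_multiplier, clip_quantile=0.5, rng=None):
    """One DP-SGD step with an adaptive clipping threshold.

    Instead of a fixed constant, the clipping threshold is set to a
    quantile of the current batch's per-sample gradient norms (an
    illustrative adaptive rule; the paper's exact rule may differ).
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_sample_grads, axis=1)
    clip = np.quantile(norms, clip_quantile)              # adaptive threshold
    # Scale each per-sample gradient so its norm is at most `clip`.
    scale = np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale[:, None]
    # Gaussian noise calibrated to the clipping threshold, as in DP-SGD.
    noise = rng.normal(0.0, noise_multiplier * clip,
                       size=per_sample_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_sample_grads)
```

Because the threshold tracks the batch's own norm distribution, it shrinks as gradients shrink during training, which is the motivation for adaptive clipping over a constant bound.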