Since AlphaGo defeated the world Go champion in 2016 and attracted wide attention, neural networks have grown increasingly popular, and research on them has steadily advanced and been applied in many fields. Today, artificial intelligence and machine learning are an essential part of modern society and of intelligent systems: image recognition, speech recognition, and visual learning are all closely tied to machine learning. In practice, however, unsatisfactory training results, or even outright training failure, are frequently encountered, so improving the accuracy of trained neural networks is an important goal. In this paper, classical neural network models for speech recognition, image processing, MNIST, and similar tasks are used to tune training parameters and to improve training accuracy through voting, quantization, restarts, and related methods. Part of the study aims to characterize the relationship between the number of restarts in the training process and the total improvement in learning; several algorithms for exploiting these restarts are also compared and selected. Finally, we conclude that the more restarts are performed when training a convolutional neural network, the smaller the marginal gain in accuracy obtained from each additional restart.
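To make the restart procedure concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual code): a small convolutional network is trained on MNIST several times from fresh random initializations, and only the best test accuracy is kept, so the marginal gain of each additional restart can be observed. The function names (build_cnn, run_once, train_with_restarts), the architecture, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of restart-based training on MNIST with a small CNN.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def build_cnn():
    # Small convolutional classifier for 28x28 grayscale digit images.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 7 * 7, 10),
    )

def run_once(train_loader, test_loader, epochs=1):
    # One training run from a fresh random initialization.
    model = build_cnn()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Measure test accuracy of this run.
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total

def train_with_restarts(num_restarts=5):
    tfm = transforms.ToTensor()
    train_loader = DataLoader(
        datasets.MNIST(".", train=True, download=True, transform=tfm),
        batch_size=128, shuffle=True)
    test_loader = DataLoader(
        datasets.MNIST(".", train=False, download=True, transform=tfm),
        batch_size=256)
    best = 0.0
    for r in range(num_restarts):
        acc = run_once(train_loader, test_loader)  # re-initialize and retrain
        best = max(best, acc)                      # keep the best run so far
        print(f"restart {r}: accuracy {acc:.4f}, best so far {best:.4f}")
    return best

if __name__ == "__main__":
    train_with_restarts()
```

Under this setup, the quantity of interest is how much the "best so far" value improves as num_restarts grows, which is the diminishing return summarized in the conclusion above.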