The rapid development of Internet technology has driven the vigorous growth of multimedia. As one of the most classic instruments, the violin has seen extensive development in composition, education, and performance. With the growing number of recorded violin performances, effectively organizing and retrieving these musical works has become an urgent problem, and classifying and organizing music by the emotional properties of a performance is a common approach. Deep learning is a modeling approach based on feature hierarchies and unsupervised feature learning, with strong learning ability and adaptability. Long short-term memory (LSTM), built on the recurrent neural network (RNN), is one of the classic deep learning models; it can effectively learn the characteristics of time-series data and make accurate predictions. Therefore, building on the classical Hevner emotion classification model, this paper proposes an LSTM-based emotion recognition method for dynamic violin performances, which selects acoustic features and classifies the acoustic audio signals contained in the performances. To verify the effectiveness of this method, the paper carries out data labeling, feature selection, and model testing on real violin music data in turn. The results show that the proposed method greatly reduces training time and improves prediction accuracy, which reaches 83%, higher than existing methods. The accuracy and iteration counts for violin music of different emotional categories are also reported. Moreover, the method is robust to changes in genre, timbre, and noise, and its emotion recognition performance is superior.
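To make the classification setup concrete, the following is a minimal sketch of an LSTM classifier over frame-level acoustic feature sequences, written in PyTorch. It is illustrative only: the paper does not specify its feature set, network size, or training details, so the 20-dimensional features, the 64 hidden units, and the `ViolinEmotionLSTM` name are assumptions; the eight output classes correspond to the eight adjective clusters of the Hevner model.

```python
import torch
import torch.nn as nn

class ViolinEmotionLSTM(nn.Module):
    """LSTM classifier over frame-level acoustic feature sequences.

    Hypothetical dimensions: 20 acoustic features per frame and 64 hidden
    units are placeholders, not values from the paper. The 8 output
    classes follow the eight Hevner emotion clusters.
    """

    def __init__(self, num_features: int = 20, hidden_size: int = 64,
                 num_classes: int = 8):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_frames, num_features)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden_size)
        return self.classifier(h_n[-1])  # logits: (batch, num_classes)

# Example: a batch of 4 clips, each with 300 frames of 20 features.
model = ViolinEmotionLSTM()
features = torch.randn(4, 300, 20)
logits = model(features)
predicted = logits.argmax(dim=1)  # one emotion class index per clip
```

Taking the final hidden state as a clip-level summary is one common way to map a variable-length feature sequence to a single emotion label; the architecture actually used in the paper may differ.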