Aerobics is full of charm, and music plays an integral role in it. As music penetrates aerobics, the "sound" of musical art is introduced into the "shape" of aerobics movements, and the auditory and visual experiences combine, greatly expanding the connotation and extension of aerobics. This paper proposes an aerobics music adaptation recommendation algorithm that combines classification and collaborative filtering. First, by computing the similarity of user context information, a collaborative filtering algorithm produces an initial aerobics music recommendation list. Then a classification model is trained with a machine learning algorithm to obtain the user's aerobics music type preference in a specific context. Finally, the recommendation list obtained by collaborative filtering is fused with the music preference obtained by the classification model to provide personalized aerobics music adaptation recommendations for users in specific situations. For the recommendation itself, the algorithm is implemented as a deep neural network composed of an independently recurrent neural network (IndRNN) and an attention mechanism. In the data preprocessing stage, audio features of the user's listening history are extracted with a scattering transform; these features are then combined with the user's profile and passed through an IndRNN with a hybrid attention mechanism to obtain the recommendation list. The experimental results show that this method effectively improves the performance of a personalized music recommendation system: compared with the single-algorithm baselines IndRNN and LSTM, recommendation accuracy improves by 7.8% and 20.9%, respectively.
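The hybrid step described above, combining a context-aware collaborative-filtering score list with a classifier's genre-preference distribution, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine context similarity, the linear fusion weight `alpha`, and all data are assumptions introduced for clarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two user-context vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cf_scores(target_ctx, neighbor_ctxs, neighbor_ratings):
    # Context-aware CF: weight each neighbor's track ratings by how
    # similar that neighbor's context is to the target user's context.
    scores = {}
    for ctx, ratings in zip(neighbor_ctxs, neighbor_ratings):
        w = cosine(target_ctx, ctx)
        for track, r in ratings.items():
            scores[track] = scores.get(track, 0.0) + w * r
    return scores

def fuse(cf, genre_pref, track_genre, alpha=0.7):
    # Blend the CF score with the classifier's preference for the
    # track's genre; alpha is an assumed mixing weight.
    return {t: alpha * s + (1 - alpha) * genre_pref.get(track_genre[t], 0.0)
            for t, s in cf.items()}
```

Ranking the fused scores in descending order then yields the final personalized recommendation list for the given context.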
As research in emotion analysis continues to develop, music, a common multimodal information carrier in daily life that conveys emotion through both lyrics and melody, has gradually been incorporated into the field. The CNN-LSTM fusion classification model proposed in this paper effectively improves the accuracy of emotion classification for audio and lyrics. To address the problems that traditional decision-level fusion ignores the correlation between modalities and that datasets are limited, this paper further improves the existing Thayer-dimension emotional decision fusion method, taking the audio energy-axis data as the main basis for discrimination and improving the accuracy of decision-fusion classification. Building on the results of the music emotion analysis, this paper then carries out a music generation task. Exploiting the fact that lyrics and melody usually express consistent emotion, a dual Seq2Seq framework based on reinforcement learning is constructed. By introducing reward terms for emotional consistency and content fidelity, the generated melody carries the same emotion as the input lyrics, and good results are achieved. Compared with a plain Seq2Seq model, the accuracy of the proposed model improves by about 1.1%, showing that reinforcement learning can effectively improve model accuracy.
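The reward shaping described above, a weighted sum of an emotional-consistency term and a content-fidelity term, can be sketched as below. The two scoring functions are toy stand-ins (label agreement and token overlap), not the paper's actual emotion classifier or dual-reconstruction metric, and the weights are assumed.

```python
def emotion_consistency(lyric_emotion, melody_emotion):
    # 1.0 when the predicted emotion labels agree, 0.0 otherwise.
    return 1.0 if lyric_emotion == melody_emotion else 0.0

def content_fidelity(source_tokens, reconstructed_tokens):
    # Token-overlap proxy for how well the dual (backward) Seq2Seq
    # reconstructs the input from the generated melody.
    if not source_tokens:
        return 0.0
    recon = set(reconstructed_tokens)
    hits = sum(1 for t in source_tokens if t in recon)
    return hits / len(source_tokens)

def reward(lyric_emotion, melody_emotion, src, recon, w_emo=0.5, w_fid=0.5):
    # Combined reward used to update the generator's policy.
    return (w_emo * emotion_consistency(lyric_emotion, melody_emotion)
            + w_fid * content_fidelity(src, recon))
```

In a policy-gradient setup, this scalar reward would weight the log-likelihood of each generated melody sequence during training.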