Emotion is important information that people convey during communication, and changes in emotional state affect perception and decision-making, which introduces an emotional dimension into human-computer interaction. Emotion can be expressed through facial expressions, speech, posture, physiological signals, text, and so on, so emotion recognition is essentially a multimodal fusion problem. This paper surveys the different teaching modes used by the teachers and students of our school, designs the load capacity with the K-means algorithm, and builds a multimedia network sharing classroom. The classroom creates piano music situations to stimulate students' interest in learning, uses audiovisual and other tools to mobilize students' emotions, and uses multimedia guidance to extend students' knowledge of piano music, comprehensively improving their aesthetic ability and capacity for autonomous learning. Comparing students before and after 3 months of teaching, the study found that the multimedia sharing classroom can be up to 50% ahead of traditional teaching methods in increasing students' interest, and that teachers' acceptance of the multimedia network sharing classroom is also high.
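The abstract does not specify what "designing the load capacity through the K-means algorithm" clusters, so the following is only a minimal sketch of the clustering step, assuming hypothetical per-session load features (peak bandwidth and concurrent stream count) and a plain NumPy implementation rather than the authors' actual pipeline.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns (centroids, labels) for an (n, d) array."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster emptied.
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical per-session load features: (peak bandwidth in Mbit/s, concurrent streams).
sessions = np.array([[2.0, 3], [2.5, 4], [8.0, 12], [7.5, 11], [15.0, 25], [14.0, 22]])
centroids, labels = kmeans(sessions, k=3)
print(labels)  # cluster index per session; each cluster is sized as one load tier
```

Grouping sessions into a small number of load tiers in this way would let capacity be provisioned per tier rather than per individual session, which is one plausible reading of the load-capacity design described above.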
The main goal of speech recognition technology is to use computers to convert the analog signal of human speech into a machine-usable representation, such as a behavior pattern or binary code. This differs from speaker identification and speaker verification, which attempt to identify or confirm the person who uttered the speech rather than its lexical content. In the short term, the system should be able to record a musical performance on a given instrument, extract note and duration information from the recording, and generate a corresponding MID file according to the MIDI standard; since the instrument type can be set in advance, this enables timbre transformation, for example recording a harmonica performance and playing back the resulting MID file as piano sound. With the rapid development of the mobile Internet, fields such as machine learning, electronic communication, and navigation place high demands on real-time, standard text recognition technology. This paper merges visual music audio into a text-based data set for training: the exported scanner features are used for model training, the model is used to extract features, and those features are then used for pretraining. The DNN results show that an end-to-end speech recognition scheme that replaces long short-term memory networks with dilated convolutions, evaluated in behavioral tests on mobile devices, can provide a larger receptive field than the recurrent alternative. The experimental results show that with 2400 input sampling points, the convergence of the model slows after about 90 iterations and the loss on the validation set increases as the number of iterations grows further. This shows that the model in this paper can fully meet the needs of speech recognition in piano music scenes.
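The abstract claims that dilated convolutions can replace LSTM layers by providing a larger receptive field, but gives no architecture details. The block below is a minimal sketch of that idea, assuming a PyTorch-style stack of 1-D convolutions with exponentially growing dilation; the channel count, dilation schedule, and class name are illustrative, not the authors' model.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of 1-D convolutions with exponentially growing dilation.

    With kernel size 3 and dilations 1, 2, 4, 8 the receptive field is
    1 + 2*(1 + 2 + 4 + 8) = 31 frames, so one feed-forward pass covers a
    context that a recurrent layer would otherwise accumulate step by step.
    """
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        for d in dilations:
            # padding=d keeps the frame length unchanged for kernel size 3.
            layers += [nn.Conv1d(channels, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, channels, frames)
        return self.net(x)

features = torch.randn(1, 64, 2400)  # e.g. a 2400-point input segment
out = DilatedBlock()(features)
print(out.shape)                      # torch.Size([1, 64, 2400])
```

Doubling the dilation at each layer grows the receptive field exponentially with depth at constant cost per layer, which is the usual argument for using dilated convolutions instead of recurrence when long temporal context is needed.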