The main goal of speech recognition technology is to use computers to convert the human speech signal into a machine-readable representation, such as text or binary codes. It differs from speaker identification and speaker verification, which attempt to identify or verify who uttered the speech rather than the lexical content it carries.

The short-term goal of this work is a system that records a melody played by the user on some musical instrument, extracts note and duration information from the recording, and generates a corresponding MIDI (.mid) file according to the MIDI standard. Because the instrument type can be set in advance, the system also performs timbre transformation: a melody played on a harmonica can, for example, be played back as a piano sound (a sketch of this note-to-MIDI stage is given below).

With the rapid development of the mobile Internet, fields such as machine learning, electronic communication, and navigation place high demands on the real-time performance and accuracy of speech-to-text recognition. This paper incorporates recorded musical audio into a text-labeled training set, extracts acoustic features from the audio, uses those features to pre-train the model, and then trains a deep neural network (DNN) on the pre-trained representation (the feature-extraction step is sketched below). In the end-to-end recognition network, the long short-term memory (LSTM) layers are replaced with dilated convolutions, which provide a larger receptive field at lower sequential cost and are better suited to deployment on mobile devices (see the dilated-convolution sketch below).

The experimental results show that with 2400 input sampling points, convergence slows after about 90 iterations and the loss on the validation set increases as the number of iterations grows; that is, the model begins to overfit, so training should be stopped near this point (an early-stopping sketch is given below). Overall, the model in this paper can meet the needs of speech recognition in piano music scenes.
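A minimal sketch of the note-to-MIDI stage described above, assuming librosa for pitch tracking and mido for writing the file; the hop length, tempo, pyin-based pitch tracker, and helper name audio_to_midi are illustrative assumptions, not the paper's stated pipeline.

```python
import librosa
import mido

SR = 22050
HOP = 256            # analysis hop in samples (assumed)
TICKS_PER_BEAT = 480
BPM = 120            # assumed fixed tempo for tick conversion

def audio_to_midi(wav_path, out_path, program=0):
    """Track pitch, quantize to MIDI notes, and write a .mid file.

    program=0 selects Acoustic Grand Piano, so a harmonica recording
    is rendered back with a piano timbre, as described in the text.
    """
    y, _ = librosa.load(wav_path, sr=SR)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz('C2'),
        fmax=librosa.note_to_hz('C7'), sr=SR, hop_length=HOP)

    # Collapse frame-level pitches into (midi_note, n_frames) segments;
    # unvoiced gaps (rests) are dropped here for brevity.
    notes, last, count = [], None, 0
    for hz, v in zip(f0, voiced):
        cur = int(round(librosa.hz_to_midi(hz))) if v else None
        if cur == last:
            count += 1
        else:
            if last is not None:
                notes.append((last, count))
            last, count = cur, 1
    if last is not None:
        notes.append((last, count))

    # One analysis frame lasts HOP/SR seconds; convert to MIDI ticks.
    sec_per_tick = 60.0 / (BPM * TICKS_PER_BEAT)
    frame_ticks = int(round((HOP / SR) / sec_per_tick))

    mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.Message('program_change', program=program, time=0))
    for note, n_frames in notes:
        track.append(mido.Message('note_on', note=note, velocity=64, time=0))
        track.append(mido.Message('note_off', note=note, velocity=64,
                                  time=n_frames * frame_ticks))
    mid.save(out_path)
```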
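A minimal sketch of the acoustic feature-extraction step, assuming log-mel spectrogram features; the paper does not name the exact feature type, so n_mels, n_fft, and hop_length here are assumed values typical for speech models.

```python
import numpy as np
import librosa

def extract_features(y, sr=16000, n_mels=80, n_fft=400, hop_length=160):
    """Return a (frames, n_mels) log-mel feature matrix for DNN training."""
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    # Per-utterance mean/variance normalization, a common pre-training step.
    logmel = (logmel - logmel.mean()) / (logmel.std() + 1e-8)
    return logmel.T
```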
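A minimal sketch of replacing recurrent (LSTM) layers with a stack of dilated 1-D convolutions, written in PyTorch; the channel width and dilation schedule are illustrative, not the paper's reported architecture. It shows how the receptive field grows exponentially with the dilation rate while every frame is still processed in parallel.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of dilated convolutions. With kernel size 3 and dilations
    1, 2, 4, 8 the receptive field is 1 + 2*(1+2+4+8) = 31 input frames,
    far wider than a single plain convolution of the same depth."""
    def __init__(self, channels=128, kernel_size=3, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                nn.Conv1d(channels, channels, kernel_size,
                          dilation=d, padding=d * (kernel_size - 1) // 2),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (batch, channels, frames)
        return self.net(x) + x     # residual connection keeps training stable

# Example: 80-dim log-mel frames projected to 128 channels, then dilated.
frontend = nn.Conv1d(80, 128, kernel_size=1)
block = DilatedBlock()
feats = torch.randn(4, 80, 2400 // 160)   # e.g. 2400 samples -> 15 frames
out = block(frontend(feats))               # same shape as the projected input
```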
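A minimal sketch of early stopping on the validation loss, motivated by the reported observation that the validation loss rises beyond roughly 90 iterations; train_step, val_loss, and the patience value are hypothetical stand-ins, as the paper does not describe its stopping rule.

```python
def train_with_early_stopping(train_step, val_loss, max_iters=200, patience=10):
    """Run training until the validation loss stops improving."""
    best, best_iter = float('inf'), 0
    for it in range(1, max_iters + 1):
        train_step()
        loss = val_loss()
        if loss < best:
            best, best_iter = loss, it
        elif it - best_iter >= patience:
            # Validation loss has not improved for `patience` iterations:
            # stop before overfitting degrades the model further.
            print(f'stopping at iteration {it} (best at {best_iter})')
            break
    return best_iter
```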