Digitalization is an important direction for the transformation of higher education, but the teaching of music majors still suffers from problems such as outdated methods and rigid teaching models. To explore the digital transformation path of music education in colleges and universities, this paper designs and constructs a music emotion recognition model that provides technical support for sight-singing practice, appreciation of famous works, and error-correction exercises in the music classroom. Through feature extraction, the expressive forms of music are transformed into music emotion features; wrapper and filter methods are used for feature selection; and a self-attention BiLSTM algorithm is used to establish the mapping between the selected features and the emotion model, thereby recognizing music emotion. The constructed model is then used to assist the teaching of music majors at a university in a teaching experiment, and the teaching effect is examined. After the model-assisted teaching, the experimental class's scores in music emotion comprehension, work element differentiation, emotion expression, and music psychology all improved to varying degrees, and each showed a significant difference from the corresponding pre-test score (Sig. < 0.05). The overall average score rose by 20.44% to 94.75. With model-assisted teaching and the digitalization of music education, the teaching quality of the experimental class improved markedly, indicating that this paper's exploration of the digital transformation path of music education in colleges and universities was fruitful. This study offers an innovative exploration of the integration of modern information technology with music education and of the digital teaching reform of music majors in colleges and universities, and achieves good results.
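The recognition pipeline described above pools BiLSTM hidden states with self-attention before classifying the emotion. As a minimal illustrative sketch (not the paper's implementation), the following NumPy code shows the attention-pooling and classification step; the sequence length, hidden size, number of emotion classes, and all weight matrices here are hypothetical stand-ins, with random values in place of trained BiLSTM outputs and parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Self-attention pooling: score each time step against a learned
    vector w, normalize the scores with softmax, and return the
    attention-weighted sum of the hidden states."""
    scores = H @ w            # (T,) one relevance score per frame
    alpha = softmax(scores)   # (T,) attention weights, sum to 1
    return alpha @ H, alpha   # (2d,) pooled context vector

# Hypothetical shapes: 20 audio frames, BiLSTM hidden size 8 per
# direction (so 16 per step), 4 emotion classes.
T, d = 20, 8
H = rng.normal(size=(T, 2 * d))      # stand-in for BiLSTM hidden states
w = rng.normal(size=2 * d)           # attention parameter vector
W_out = rng.normal(size=(2 * d, 4))  # output projection to 4 classes

context, alpha = attention_pool(H, w)
probs = softmax(context @ W_out)     # class probabilities
pred = int(np.argmax(probs))         # predicted emotion index
```

In a trained model, `H` would come from a BiLSTM run over the extracted music emotion features, and `w` and `W_out` would be learned jointly with it; the sketch only shows how self-attention condenses a variable-length sequence into a single vector for emotion classification.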