This paper proposes a new algorithmic composition network from a machine learning perspective, based on an in-depth study of the related literature. It also examines the characteristics of music and develops a model for recognizing musical emotion. The main melody track is extracted using the information entropy of pitch and intensity, and note features are aggregated into bar-level features. Finally, cosine similarity (the cosine of the angle between feature vectors) is used to judge how similar the feature vectors of adjacent bars are, allowing the music to be divided into independent segments; the emotion model then analyzes each segment. By quantifying musical features, this paper classifies and quantifies musical emotion through the mapping between musical features and emotion, allowing the model to identify emotion accurately. According to simulation results, the model's emotion recognition accuracy reaches 93.78 percent and the algorithm's recall reaches 96.3 percent. The proposed method has higher recognition ability than comparable methods, and its emotion recognition results are more reliable. This work not only supports composers' creative needs but can also serve intelligent music services.
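To make the pipeline concrete, the sketch below illustrates the two algorithmic steps named above: selecting the main melody track by pitch/intensity entropy and splitting the piece at bars where adjacent feature vectors diverge. This is a minimal illustration, not the paper's implementation; the function names, the 16-bin histograms, the additive combination of the two entropies, and the 0.85 similarity cut-off are all assumptions.

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a binned value distribution.

    The bin count is an assumption; the paper does not specify one here.
    """
    hist, _ = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pick_main_track(tracks):
    """Return the index of the track with the highest combined entropy.

    `tracks` is a list of dicts with 'pitches' and 'velocities' arrays.
    Summing the two entropies is an assumed combination rule.
    """
    scores = [shannon_entropy(t["pitches"]) + shannon_entropy(t["velocities"])
              for t in tracks]
    return int(np.argmax(scores))

def segment_by_cosine(bar_features, threshold=0.85):
    """Split a sequence of bar-level feature vectors into segments.

    A new segment starts whenever the cosine similarity between adjacent
    bars drops below `threshold` (an assumed cut-off). Returns a list of
    (start, end) bar-index pairs, end exclusive.
    """
    boundaries = [0]
    for i in range(1, len(bar_features)):
        a, b = bar_features[i - 1], bar_features[i]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if cos < threshold:
            boundaries.append(i)
    return list(zip(boundaries, boundaries[1:] + [len(bar_features)]))

# Toy usage: 12 bars of 8-dimensional features with an abrupt change at bar 6.
rng = np.random.default_rng(0)
bars = np.vstack([rng.normal(1.0, 0.05, (6, 8)),
                  rng.normal(-1.0, 0.05, (6, 8))])
print(segment_by_cosine(bars))  # expected: [(0, 6), (6, 12)]
```

Each resulting segment would then be passed to the emotion model for classification; that classification step depends on the paper's feature-to-emotion mapping and is not sketched here.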