The application of machine learning to intelligent music creation has become an important field of music research. Most current intelligent-creation methods apply fixed coding steps to the audio data, which leads to weak feature-expression ability. Building on convolutional neural network theory, this paper proposes a deep intelligent music creation method. The model uses a convolutional recurrent neural network to generate effective hash codes: the music signal is first preprocessed to obtain a Mel spectrogram, which is then fed into a pretrained CNN so that spatial detail and the semantic information of musical symbols can be extracted from its convolutional layers. A selection strategy applied to the feature maps of each convolutional layer constructs the feature-map sequence, addressing the problems of high feature dimensionality and poor recognition performance. In the simulations, Mel-frequency cepstral coefficients (MFCCs) were used to extract features from four different music signals; the convolutional neural network then selected the features that best represent each signal, and the continuous signals were discretized and reduced in dimension. The experimental results show that the high-dimensional music data are reduced at the data level: after compression, the accuracy of intelligent creation reaches 98% and the distortion rate of the characteristic signal falls below 5%, effectively improving both the algorithm's performance and its ability to create music intelligently.
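The sketch below illustrates the described pipeline at a high level: preprocessing a music signal into a log-Mel spectrogram and MFCCs, passing the spectrogram through a small CNN, collecting per-layer feature maps, and binarizing pooled features into a hash code. It is a minimal illustration assuming librosa and PyTorch; the network architecture, layer count, hash length, and the `librosa.example("trumpet")` demo clip are illustrative assumptions, not the paper's exact model or data.

```python
# Minimal sketch of the described pipeline (assumed libraries: librosa, PyTorch).
# Architecture and hash length are illustrative, not the paper's exact model.
import librosa
import numpy as np
import torch
import torch.nn as nn

def mel_and_mfcc(path, sr=22050, n_mels=128, n_mfcc=13):
    """Preprocess a music signal into a log-Mel spectrogram and MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return log_mel, mfcc

class ConvFeatureHasher(nn.Module):
    """Toy CNN that exposes per-layer feature maps and emits a binary code."""
    def __init__(self, hash_bits=64):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32, hash_bits)  # pooled features -> hash logits

    def forward(self, x):                     # x: (batch, 1, n_mels, frames)
        f1 = self.conv1(x)                    # early layer: spatial detail
        f2 = self.conv2(f1)                   # deeper layer: semantic content
        pooled = f2.mean(dim=(2, 3))          # global average pooling
        code = torch.sign(torch.tanh(self.head(pooled)))  # binarized hash
        return code, (f1, f2)                 # hash code + feature-map sequence

if __name__ == "__main__":
    log_mel, mfcc = mel_and_mfcc(librosa.example("trumpet"))
    x = torch.from_numpy(log_mel).float()[None, None]  # add batch/channel dims
    model = ConvFeatureHasher()
    code, feature_maps = model(x)
    print("hash code shape:", code.shape, "| MFCC shape:", mfcc.shape)
```

The binarized hash reduces the high-dimensional feature maps to a compact code at the data level, which is the compression step the abstract refers to; a real implementation would train the CNN and apply the paper's feature-map selection strategy rather than simple global pooling.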