Labanotation can represent all human movements effectively and is easy to read and archive. However, recording Labanotation by hand is time-consuming, so accurately and rapidly recording and preserving traditional dance movements with Labanotation is a key research question. This study proposes an automatic Labanotation generation algorithm based on deep learning (DL). The BVH motion-capture file is first parsed and its data converted. On this foundation, a convolutional neural network (CNN) for generating the notation of human lower-limb movements is proposed; the network learns the spatial information of actions well and performs strongly in classification and recognition. Finally, an automatic Labanotation generation algorithm based on spatial segmentation is proposed. Each frame of data is first converted into a symbol using spatial rules, yielding a highly redundant symbol sequence. The sequence is then regularized according to the minimum beat of motion obtained through wavelet analysis, and a classifier decides whether each symbol is kept or discarded to produce the final score. As a result, more accurate dance scores can be generated for simple human movements.
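The first step, parsing the BVH file, can be sketched minimally as follows. This is an illustrative assumption of how the MOTION section might be read, not the paper's actual preprocessing: it skips the HIERARCHY skeleton entirely, whereas a full implementation would also recover the joint tree and channel order before converting the data.

```python
# Hypothetical sketch: extract per-frame channel values from the
# MOTION section of a BVH file. Assumes a well-formed file; the
# HIERARCHY section (skeleton and channel layout) is not parsed here.

def parse_bvh_motion(text):
    """Return (frame_count, frame_time, frames) from BVH source text."""
    lines = iter(text.splitlines())
    # Skip everything up to and including the MOTION keyword.
    for line in lines:
        if line.strip() == "MOTION":
            break
    # The two header lines: "Frames: N" and "Frame Time: T".
    frame_count = int(next(lines).split(":")[1])
    frame_time = float(next(lines).split(":")[1])
    # Each remaining line holds one frame of channel values.
    frames = [[float(v) for v in next(lines).split()]
              for _ in range(frame_count)]
    return frame_count, frame_time, frames
```

Given the frame time, the frame list can then be resampled or grouped by beat in later stages of the pipeline.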
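The per-frame conversion into symbols relies on spatial rules. A minimal sketch of how such a rule could quantize a joint displacement into a coarse Labanotation direction and level symbol is shown below; the thresholds, sector layout, and symbol names are illustrative assumptions, not the paper's actual spatial rules.

```python
import math

# Hypothetical sketch: quantize a lower-limb joint's displacement
# (relative to the hip) into one of eight horizontal Labanotation
# directions plus "place", and into a low/middle/high level.
# All thresholds below are assumed values for illustration.

DIRECTIONS = ["place", "forward", "right-forward", "right",
              "right-backward", "backward", "left-backward",
              "left", "left-forward"]

def direction_symbol(dx, dz, dead_zone=0.05):
    """Map a horizontal displacement (dx, dz) to a direction symbol.

    dx: lateral offset, dz: sagittal (forward) offset, in metres.
    Displacements inside the dead zone are treated as 'place'.
    """
    if math.hypot(dx, dz) < dead_zone:
        return "place"
    # Angle from the forward (+z) axis, measured clockwise.
    angle = math.degrees(math.atan2(dx, dz)) % 360.0
    # Eight 45-degree sectors centred on the principal directions.
    sector = int(((angle + 22.5) % 360.0) // 45.0)
    return DIRECTIONS[1 + sector]

def level_symbol(dy, low=-0.1, high=0.1):
    """Classify the vertical offset dy into low/middle/high level."""
    if dy < low:
        return "low"
    if dy > high:
        return "high"
    return "middle"
```

Applying such rules to every frame is what makes the raw symbol sequence highly redundant: consecutive frames of the same pose map to the same symbol, which the later beat regularization and classifier stages then compress.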
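The wavelet-analysis step for finding the minimum beat of motion can be illustrated with a one-level Haar decomposition of a motion-energy signal. The wavelet family, decomposition depth, and peak-picking rule here are assumptions for the sketch; the paper's actual analysis is not specified in this abstract.

```python
# Hypothetical sketch: estimate the minimum beat length (in frames)
# from a 1-D motion-energy signal using a hand-rolled Haar wavelet
# transform. Detail coefficients peak where motion changes sharply,
# which serves as the cue for candidate beat boundaries.

def haar_step(signal):
    """One level of the Haar wavelet transform.

    Returns (approximation, detail) coefficient lists, each half
    the input length.
    """
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2.0)
        detail.append((a - b) / 2.0)
    return approx, detail

def min_beat_frames(energy, levels=1, threshold=0.1):
    """Estimate the minimum beat as the median gap between
    detail-coefficient peaks after `levels` decompositions,
    converted back to original-frame units."""
    coeffs, detail = energy, []
    for _ in range(levels):
        coeffs, detail = haar_step(coeffs)
    peaks = [i for i, d in enumerate(detail) if abs(d) > threshold]
    if len(peaks) < 2:
        return None  # no periodic structure detected
    gaps = sorted(b - a for a, b in zip(peaks, peaks[1:]))
    # Each coefficient at this level spans 2**levels original frames.
    return gaps[len(gaps) // 2] * (2 ** levels)
```

Once the minimum beat is known, the redundant per-frame symbol sequence can be regularized to that beat grid before the classifier decides which symbols to keep.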