Labanotation is a well-known notation system for documenting and archiving human motion. It plays a powerful role in dance preservation, choreography analysis, and related fields. Recently, researchers have turned to computer technology to generate Labanotation automatically rather than drawing it by hand. However, existing generation methods cannot cope with the variability of motion data, such as differences in scale, angle, motion mode, and limb. In this paper, we aim to generate Labanotation from motion capture data acquired from real folk dance performances. The main steps are feature extraction, motion segmentation, and unit movement analysis. First, we extract a normalized feature, the Lie group feature, which copes with the challenges of differing scales and angles in motion data. Second, to divide motions with different modes into unit fragments for subsequent recognition, we propose a segmentation method that combines a speed threshold with region partition. Third, to generate the Laban symbols of unit movements for different limbs, we employ two kinds of neural networks: LieNet, a network for analyzing time series data with a Lie group structure, recognizes the lower limb movements, while an extreme learning machine, a single-hidden-layer feedforward neural network, identifies the upper limb postures. Experimental results demonstrate that our feature extraction, motion segmentation, and unit movement analysis achieve better results than previous works, making the generated Labanotation scores more reliable.
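
To make the Lie group feature concrete: in the LieNet line of work, a skeleton frame is commonly encoded as the set of relative rotations between connected bones, each an element of SO(3), so the feature depends only on bone directions and is invariant to the performer's scale. The helper below is a minimal NumPy sketch of one such relative rotation via Rodrigues' formula; it is an illustration under these assumptions, not the paper's exact feature, and the function name and `eps` tolerance are hypothetical.

```python
import numpy as np

def rotation_between(u, v, eps=1e-8):
    """Rotation R in SO(3) mapping the direction of bone u onto bone v
    (Rodrigues' formula). Using unit directions makes the feature
    invariant to skeleton scale."""
    u_hat = u / (np.linalg.norm(u) + eps)
    v_hat = v / (np.linalg.norm(v) + eps)
    axis = np.cross(u_hat, v_hat)
    s = np.linalg.norm(axis)           # sin(theta)
    c = float(np.dot(u_hat, v_hat))    # cos(theta)
    if s < eps:
        if c > 0:                      # bones already aligned
            return np.eye(3)
        # anti-parallel: rotate pi about any axis p perpendicular to u_hat
        p = np.cross(u_hat, [1.0, 0.0, 0.0])
        if np.linalg.norm(p) < eps:
            p = np.cross(u_hat, [0.0, 1.0, 0.0])
        p = p / np.linalg.norm(p)
        return 2.0 * np.outer(p, p) - np.eye(3)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

Stacking one such matrix per connected bone pair yields a frame feature on a product of SO(3) groups, which is the kind of structured input LieNet is designed to consume.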
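The speed-threshold half of the segmentation step can likewise be sketched: unit movements tend to be bounded by frames where the joints nearly stop, so splitting the sequence wherever mean joint speed dips below a threshold is a reasonable first cut. The sketch below omits the region-partition component, and the function name, `fps`, `speed_thresh`, and `min_len` values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def segment_by_speed(positions, fps=30.0, speed_thresh=0.05, min_len=5):
    """Split a mocap sequence at low-speed frames.

    positions: array of shape (T, J, 3) -- T frames, J joints, xyz in meters.
    Returns a list of (start, end) frame-index pairs for moving segments.
    """
    vel = np.diff(positions, axis=0) * fps             # (T-1, J, 3) velocities
    speed = np.linalg.norm(vel, axis=2).mean(axis=1)   # mean joint speed per frame
    moving = speed > speed_thresh
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t                                  # segment begins
        elif not m and start is not None:
            if t - start >= min_len:                   # drop spurious blips
                segments.append((start, t))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments
```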