these motion data to create realistic motions for new characters by applying motion editing [1,2], motion synthesis [3][4][5] and motion retargeting [6] techniques has become a focus of research in the past several years. However, before old motions can be reused and processed, one fundamental problem has to be solved: identifying and extracting similar motion clips from the database. This is essentially a motion matching problem. The general procedure is to compute a concise, representative feature for each motion and then compare it against the features of all other motions in the mocap database. The efficiency and accuracy of these motion retrieval and annotation processes largely depend on the properties of the chosen features.

Most early work [7] uses textual descriptions such as "running" or "walking" to label existing motions in a database. This not only involves a great deal of manual work; the textual labels are also too short and general to fully represent the features of each motion. Later works [8][9][10][11] use numeric-based and logic-based features built from the 3D coordinates of every joint in every frame. These features contain much redundant information, and their high dimensionality makes motion matching very slow. Some recent works [12] present semantic features that better capture the essence of motions, and the low dimensionality of these features greatly speeds up the motion retrieval process. In this paper, we present a new feature in this category.

Previous work [13] has shown that a human motion clip can be described by a few representative poses, which we call "key-poses". Intuitively, two similar motions may share most key-poses, while motions belonging to different motion classes may share none or only a few key-poses (as shown in Fig. 1). A good selection of key-poses can therefore be used to represent different motion classes.
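The general procedure above can be sketched as follows. This is a minimal illustration, not the paper's method: the feature vectors, database contents, and the use of Euclidean distance as the similarity measure are all assumptions for the example.

```python
import numpy as np

def retrieve_similar(query_feature, database_features, k=3):
    """Rank database motions by Euclidean distance to the query feature."""
    dists = np.linalg.norm(database_features - query_feature, axis=1)
    return np.argsort(dists)[:k]  # indices of the k nearest motions

# Toy database: 4 motions, each summarized by a 5-dimensional feature.
db = np.array([
    [1.0, 0.00, 0.0, 0.50, 0.50],
    [0.9, 0.10, 0.0, 0.40, 0.60],
    [0.0, 1.00, 0.8, 0.00, 0.20],
    [0.1, 0.90, 0.7, 0.10, 0.30],
])
query = np.array([1.0, 0.05, 0.0, 0.45, 0.55])
print(retrieve_similar(query, db, k=2))  # nearest motions first
```

Because each motion is reduced to one short vector, comparing a query against the whole database is a single vectorized distance computation rather than a frame-by-frame alignment.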
The second benefit of using key-poses as a feature is that, although the space of motion categories is unbounded, the types of key-poses are relatively limited. A new category

Abstract

Using motion capture to create naturally looking motion sequences for virtual character animation has become a standard procedure in the games and visual effects industry. With the fast growth of motion data, the task of automatically annotating new motions is gaining importance. In this paper, we present a novel statistical feature to represent each motion according to the pre-labeled categories of key-poses. A probabilistic model is trained with semi-supervised learning of the Gaussian mixture model (GMM). Each pose in a given motion can then be described by a feature vector of probabilities produced by the GMM. A motion feature descriptor is proposed based on the statistics of all pose features. The experimental results and comparison with existing work show that our method performs more accurately and efficiently in motion retrieval and annotation.
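The GMM pipeline described in the abstract can be sketched as below. This is a simplified illustration, not the paper's implementation: scikit-learn's `GaussianMixture` is fit in a fully unsupervised way (standing in for the paper's semi-supervised training), and the synthetic pose clusters, dimensions, and the choice of the mean as the aggregating statistic are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical key-pose data: each pose is a 4-D vector, and the two GMM
# components stand in for two pre-labeled key-pose categories.
rng = np.random.default_rng(0)
key_pose_samples = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 4)),  # e.g. a "standing" cluster
    rng.normal(loc=3.0, scale=0.3, size=(100, 4)),  # e.g. a "crouching" cluster
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(key_pose_samples)

def motion_descriptor(motion_poses, gmm):
    """Describe a motion by averaging per-pose GMM probability vectors."""
    pose_features = gmm.predict_proba(motion_poses)  # (n_frames, n_components)
    return pose_features.mean(axis=0)                # statistic over all poses

# A motion whose frames lie near the first cluster.
motion = rng.normal(loc=0.0, scale=0.3, size=(50, 4))
print(motion_descriptor(motion, gmm))  # probability mass concentrates on one component
```

The resulting descriptor has one entry per key-pose category regardless of the motion's length, which is what keeps the feature low-dimensional and the retrieval comparison cheap.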