Human motion recognition is one of the most important branches of human-centered research. In recent years, motion recognition based on RGB-D data has attracted much attention. Alongside developments in artificial intelligence, deep learning techniques have achieved remarkable success in computer vision. In particular, convolutional neural networks (CNNs) have excelled at image-based tasks, while recurrent neural networks (RNNs) are renowned for sequence-based problems. Accordingly, deep learning methods based on CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups according to the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research.

An action typically involves multiple body parts, in contrast with the few body parts involved in a gesture. An activity is composed of a sequence of actions. An interaction is a type of motion performed by two actors: one actor is human, while the other may be a human or an object. The interaction category therefore includes human-human and human-object interactions; "hugging each other" and "playing guitar" are examples of these two kinds of interaction, respectively. A group activity is the most complex type of activity and may be a combination of gestures, actions and interactions. It necessarily involves more than two humans and from zero to multiple objects.
Examples of group activities include "two teams playing basketball" and "group meeting".

Early research on human motion recognition was dominated by the analysis of still images or videos [2,144,132,99,44,176]. Most of these efforts used color and texture cues in 2D images for recognition. However, the task remains challenging due to problems posed by background clutter, partial occlusion, view-point and lighting changes, execution rate and biometric variation. These challenges persist even with current deep learning approaches [49,4].

With the recent development of cost-effective RGB-D sensors, such as the Microsoft Kinect™ and Asus Xtion™, RGB-D-based motion recognition has attracted much attention. This is largely because the extra dimension (depth) is insensitive to illumination changes and captures rich 3D structural information about the scene. Additionally, the 3D positions of body joints can be estimated from depth maps [114]. As a consequence, several methods based on RGB-D data have been proposed, and the approach has proven to be a promising direction for human motion analysis.

Several survey papers have summarized the research on human motion recognition...
For one-shot learning gesture recognition, two important challenges are how to extract distinctive features and how to learn a discriminative model from only one training sample per gesture class. For feature extraction, a new spatio-temporal feature representation called 3D enhanced motion scale-invariant feature transform (3D EMoSIFT) is proposed, which fuses RGB-D data. Compared with other features, the new feature set is invariant to scale and rotation and provides a more compact and richer visual representation. For learning a discriminative model, all features extracted from the training samples are clustered with the k-means algorithm to learn a visual codebook. Then, unlike traditional bag-of-features (BoF) models, which use vector quantization (VQ) to map each feature to a single visual codeword, a sparse coding method named simulation orthogonal matching pursuit (SOMP) is applied, so that each feature can be represented by a linear combination of a small number of codewords. Compared with VQ, SOMP yields a much lower reconstruction error and achieves better performance. The proposed approach has been evaluated on the ChaLearn gesture database, and the result ranked among the top-performing techniques in the ChaLearn gesture challenge (round 2).
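The codebook-learning and encoding contrast described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the random features stand in for 3D EMoSIFT descriptors, the `kmeans_codebook`, `vq_encode` and `omp_encode` helpers are hypothetical names, and plain orthogonal matching pursuit (OMP) is used as a stand-in for the SOMP variant. It shows why a sparse combination of codewords reconstructs a descriptor with lower error than VQ's single hard-assigned codeword.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy descriptors standing in for 3D EMoSIFT features (hypothetical data).
features = rng.normal(size=(200, 16))

def kmeans_codebook(X, k=32, iters=20, seed=0):
    """Learn a visual codebook: k cluster centers from the training features."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest center, then recompute centers.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def vq_encode(x, codebook):
    """Vector quantization: hard-assign x to its single nearest codeword."""
    j = np.linalg.norm(codebook - x, axis=1).argmin()
    code = np.zeros(len(codebook))
    code[j] = 1.0
    return code

def omp_encode(x, codebook, n_nonzero=3):
    """Greedy OMP: represent x as a sparse combination of a few codewords."""
    D = codebook.T                              # dictionary: columns are codewords
    Dn = D / np.linalg.norm(D, axis=0)          # unit-norm atoms for selection
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        j = int(np.abs(Dn.T @ residual).argmax())  # most correlated atom
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the whole support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

codebook = kmeans_codebook(features)
x = features[0]
vq_err = np.linalg.norm(x - codebook.T @ vq_encode(x, codebook))
omp_err = np.linalg.norm(x - codebook.T @ omp_encode(x, codebook))
```

Because VQ fixes the coefficient of the chosen codeword to 1 while OMP least-squares-fits the coefficients of its selected atoms, `omp_err` is never larger than `vq_err`, mirroring the lower reconstruction error claimed for SOMP over VQ.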