Temporal invariant shape moments intuitively provide an important visual cue for human activity recognition in video sequences. In this paper, an SVM-based method for human activity recognition is introduced, in which feature extraction is carried out using a small number of computationally cheap invariant shape moments. When tested on the popular KTH action dataset, the method yields promising results that compare favorably with those reported in the literature. Furthermore, the proposed method achieves real-time performance and can therefore provide latency guarantees to real-time applications and embedded systems.
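A minimal sketch of the pipeline this abstract describes: per-frame invariant shape (Hu) moments as cheap features, pooled over a clip, then an SVM classifier. The silhouette-extraction step, the temporal pooling choice, and the RBF kernel are assumptions for illustration; the paper's exact moment set and parameters are not specified here, and random masks stand in for real KTH clips.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def clip_features(frames):
    """frames: list of binary silhouette masks (H x W, uint8).
    Returns a fixed-length descriptor: mean and std of log-scaled
    Hu moments over time (pooling choice is an assumption)."""
    hu_per_frame = []
    for mask in frames:
        m = cv2.moments(mask, binaryImage=True)
        hu = cv2.HuMoments(m).flatten()
        # log-scale for numerical stability, preserving sign
        hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
        hu_per_frame.append(hu)
    hu_per_frame = np.array(hu_per_frame)
    return np.concatenate([hu_per_frame.mean(axis=0), hu_per_frame.std(axis=0)])

# Toy data: random "silhouettes" standing in for real action clips.
rng = np.random.default_rng(0)
clips = [[(rng.random((64, 64)) > 0.5).astype(np.uint8) * 255 for _ in range(20)]
         for _ in range(10)]
labels = rng.integers(0, 2, size=10)        # e.g. "walking" vs "running"

X = np.array([clip_features(c) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)      # kernel choice is an assumption
print(clf.predict(X[:3]))
```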
Temporal shape variations intuitively appear to provide a good cue for human activity modeling. In this paper, we present a novel framework for human action recognition based on fuzzy log-polar histograms and temporal self-similarities. First, a set of reliable keypoints is extracted from a video clip (i.e., an action snippet). Local descriptors characterizing the temporal shape variations of the action are then obtained from temporal self-similarities defined on the fuzzy log-polar histograms. Finally, an SVM classifier is trained on these features to build the action recognition model. The proposed method is validated on two popular, publicly available action datasets. The results are encouraging and show that accuracy comparable or superior to the state of the art is achievable. Furthermore, the method runs in real time and can thus offer timing guarantees to real-time applications.
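The sketch below illustrates the second pipeline under simplifying assumptions: per-frame log-polar histograms of keypoint positions, a temporal self-similarity matrix built from them, and an SVM on the resulting descriptor. The keypoint detector (FAST), the hard (non-fuzzy) binning, and the Euclidean distance are stand-ins for illustration, not the paper's exact formulation, and random frames stand in for real action snippets.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def log_polar_hist(points, center, n_r=5, n_theta=8):
    """Bin keypoint coordinates into log-radius x angle cells around center."""
    d = points - center
    r = np.log1p(np.hypot(d[:, 0], d[:, 1]))
    theta = np.arctan2(d[:, 1], d[:, 0])
    hist, _, _ = np.histogram2d(r, theta, bins=[n_r, n_theta],
                                range=[[0, r.max() + 1e-6], [-np.pi, np.pi]])
    return hist.ravel() / max(hist.sum(), 1)

def self_similarity_descriptor(frames, detector):
    """Per-frame log-polar histograms -> temporal self-similarity matrix."""
    hists = []
    for frame in frames:
        kps = detector.detect(frame, None)
        pts = np.array([kp.pt for kp in kps]) if kps else np.zeros((1, 2))
        hists.append(log_polar_hist(pts, pts.mean(axis=0)))
    H = np.array(hists)
    # pairwise distances between frame histograms over time
    ssm = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    return ssm.ravel()

detector = cv2.FastFeatureDetector_create()
rng = np.random.default_rng(1)
clips = [[rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(16)]
         for _ in range(8)]                        # stand-ins for action snippets
labels = rng.integers(0, 2, size=8)

X = np.array([self_similarity_descriptor(c, detector) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)             # kernel is an assumption
print(clf.predict(X[:2]))
```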