Recognizing human actions in videos is an active research topic with broad commercial potential. Most existing action recognition methods assume the same camera view during both training and testing, so the performance of these single-view approaches can be severely degraded by camera movement and viewpoint variation. In this paper, we address this problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests to learn a discriminative mid-level representation for videos from multiple cameras. In the first step, subvolumes of continuous human-centered figures are extracted from the original videos. In the next step, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. A set of multitask random forests is then built upon multiview cuboids sampled at adjacent positions, and an integrated mid-level representation is constructed for the multiview subvolumes of one action. Finally, a random forest classifier predicts the action category from the learned representation. Experiments conducted on the multiview IXMAS action dataset illustrate that the proposed method can effectively recognize human actions depicted in multiview videos.
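The final step of the pipeline, classifying an action from a fused multiview representation with a random forest, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the number of views, and the simple concatenation-based fusion are all assumptions standing in for the learned mid-level representation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical setup: 60 action subvolumes, each described by a
# 16-dimensional descriptor per camera view (values are synthetic).
n_samples, dim = 60, 16
X_view1 = rng.normal(size=(n_samples, dim))
X_view2 = rng.normal(size=(n_samples, dim))
y = rng.integers(0, 3, size=n_samples)  # 3 action classes

# Fuse the views by concatenation -- a simplification of the paper's
# integrated mid-level representation built by multitask random forests.
X = np.hstack([X_view1, X_view2])

# Random forest classifier over the fused representation.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```

In the actual framework, `X` would be the mid-level representation produced by the multitask random forests rather than raw concatenated descriptors; the sketch only shows where the final classifier sits in the pipeline.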
Action prediction aims to infer the category of an action before it is fully executed. It is a challenging task, since neither sufficient discriminative information nor the precise progress state of the action can be obtained from an incomplete video. In this paper, we propose a novel double-layer learning framework for predicting the category of an action from partial observations. In the first layer of the framework, an unsupervised semantic reasoning method exploits the semantic information of an input incomplete video and infers its future semantic information using prior knowledge provided by fully observed training videos. In the second layer, a discriminative action prediction model introduces a latent variable to indicate the progress state of the input video and captures the relationships among the actions, the video observations, the semantic information, and the latent progress state to predict the action label of the input video. Extensive experimental results on the UT-I #1, UT-I #2, and UCF Sports datasets demonstrate the superiority of our method in predicting actions at an early stage.
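The role of the latent progress state can be illustrated with a deliberately simplified scoring scheme: for each candidate action, score the partial observation against per-stage templates and marginalize over the unknown stage by taking the best match. The templates, the number of stages, and the nearest-template scoring below are all hypothetical stand-ins for the paper's learned discriminative model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, n_stages, dim = 3, 4, 8

# Hypothetical per-(action, stage) mean descriptors, which in the real
# framework would be learned from fully observed training videos.
templates = rng.normal(size=(n_actions, n_stages, dim))

def predict_action(partial_feat):
    """Predict the action label of a partial observation.

    Scores every (action, latent progress stage) pair, then removes the
    unobserved stage by minimizing over it -- the stage acts as a latent
    variable, as in the paper's second layer.
    """
    dists = np.linalg.norm(templates - partial_feat, axis=2)  # (actions, stages)
    return int(np.argmin(dists.min(axis=1)))

# A partial observation close to action 2 at stage 1 (synthetic data).
obs = templates[2, 1] + 0.01 * rng.normal(size=dim)
pred = predict_action(obs)
```

The point of the sketch is only the inference pattern: the progress state is never observed, so the model searches over it jointly with the action label.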