2007
DOI: 10.1109/tpami.2007.70711

Actions as Space-Time Shapes

Abstract: Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach [14] for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics…
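
The central computation the abstract describes is the solution U of the Poisson equation ΔU = -1 inside the binary space-time silhouette volume, with U = 0 on and outside the shape's bounding surface. Below is a minimal sketch of that step using Jacobi relaxation; the function name, iteration count, grid spacing, and the toy ellipsoid mask are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poisson_solution(mask, num_iters=500):
    """Jacobi relaxation for Delta U = -1 inside the space-time
    silhouette (mask == True), with U = 0 outside and on the boundary.
    Assumes the mask does not touch the array border (np.roll wraps)."""
    U = np.zeros(mask.shape, dtype=np.float64)
    for _ in range(num_iters):
        # Average of the six spatio-temporal neighbours of each voxel.
        avg = sum(np.roll(U, s, axis=a) for a in range(3) for s in (1, -1)) / 6.0
        # Discrete Poisson update on a unit grid: U = avg + 1/6.
        U = np.where(mask, avg + 1.0 / 6.0, 0.0)
    return U

# Toy "action shape": an ellipsoid in a 32x32x32 (x, y, t) volume.
x, y, t = np.ogrid[-16:16, -16:16, -16:16]
mask = (x / 12.0) ** 2 + (y / 8.0) ** 2 + (t / 14.0) ** 2 < 1.0
U = poisson_solution(mask)
print(U.max())  # largest at the deepest interior point of the shape
```

On the toy volume, U grows toward the deepest interior point of the shape; the paper derives its local space-time features (such as saliency) from U and its derivatives.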

Cited by 1,284 publications (884 citation statements). References 27 publications.
“…Our approach has been validated on the multi-view INRIA XMAS (IXMAS) [20] dataset and the single-view Weizmann [21] dataset. The former provides continuous multi-view sequences of different actions performed by the same actor, whereas the latter provides segmented single-view sequences.…”
Section: Results (mentioning; confidence: 99%)
“…Generally speaking, interest points are also filtered via non-maxima suppression to prevent duplicate entries. Some approaches segment the actor, for example using the Kinect's user mask (Gorelick et al. 2007; Li et al. 2010; Cheng et al. 2012). This enables complex "volumetric" descriptions of the actor's body over time (Yang et al. 2012; Wang et al. 2012; Vieira et al. 2012; Oreifej and Liu 2013).…”
Section: Related Work (mentioning; confidence: 99%)
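
The non-maxima suppression step mentioned in the excerpt above is standard detector post-processing: a candidate survives only if its response is the maximum within a local neighbourhood, so duplicate detections of the same peak collapse to one interest point. A minimal sketch, assuming a dense response map and using SciPy's maximum_filter; the function name and radius are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def non_maxima_suppression(response, radius=1):
    """Keep only candidates whose response equals the maximum in their
    (2*radius+1)-sized neighbourhood; all others are suppressed."""
    local_max = maximum_filter(response, size=2 * radius + 1)
    return (response == local_max) & (response > 0)

# Toy 2D response map with two separate peaks.
r = np.zeros((5, 5))
r[1, 1], r[3, 3] = 1.0, 2.0
print(np.argwhere(non_maxima_suppression(r)))  # [[1 1] [3 3]]
```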
“…With these notations… and Flower [19] databases, respectively. After that, the results for face recognition using CMU PIE [20] and FERET [21], and human action recognition using Weizmann [22] and KTH [23] databases, are reported in Sections 4.3 and 4.4, respectively. It is important to point out that the main objective of these experiments is to evaluate the performance of different classifier fusion methods, not state-of-the-art digit, flower, face, and human action recognition algorithms.…”
Section: Learning the Optimal RADM Model (mentioning; confidence: 99%)