2011
DOI: 10.1007/s10044-011-0239-5

Three-dimensional action recognition using volume integrals

Abstract: This work proposes the volume integral (VI) as a new descriptor for three-dimensional action recognition. The descriptor transforms the actor's volumetric information into a two-dimensional representation by projecting the voxel data to a set of planes that maximize the discrimination of actions. Our descriptor significantly reduces the amount of data of the three-dimensional representations yet preserves the most important information. As a consequence, the action recognition process is greatly speeded up whi…
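The core idea in the abstract — collapsing a 3D voxel volume onto 2D planes by integrating occupancy — can be illustrated with a minimal sketch. The paper selects projection planes that maximize action discrimination; the sketch below substitutes a simple axis-aligned projection, so the function name and normalization choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

def project_voxels(voxels, axis=2):
    """Collapse a binary voxel volume onto a plane by integrating
    occupancy along one axis. This is a simplified, axis-aligned
    stand-in for the paper's discriminative projection planes."""
    proj = voxels.sum(axis=axis).astype(float)
    # Normalize so descriptors from volumes of different depths are comparable.
    if proj.max() > 0:
        proj /= proj.max()
    return proj

# Toy example: a 4x4x4 volume with a single fully occupied column.
vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[1, 2, :] = 1
desc = project_voxels(vol)  # 4x4 map; the occupied column integrates to 1.0
```

The resulting 2D map is far smaller than the voxel grid (here 16 values instead of 64), which is the data-reduction effect the abstract attributes to the descriptor.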

Cited by 4 publications (2 citation statements)
References 31 publications
“…Further, they formulate the task of multi-view human action recognition as a learning problem penalized by a graph structure that is built according to the human body structure. Work [4] proposes the volume integral as a new descriptor for three-dimensional action recognition. The descriptor transforms the actor's volumetric information into a two-dimensional representation by projecting the voxel data to a set of planes that maximize the discrimination of actions.…”
Section: Introduction
Confidence: 99%
“…These methods do not require accurate background subtraction, but they rely on extracted, variable features that call for suitable strategies and descriptors. Volume-based representations are modeled by stacks of silhouettes, shapes, or surfaces that use several frames to build a model, such as space-time silhouettes from shape history volumes [32], geometric properties from continuous volumes [33], spatial-temporal shapes from 3D point clouds [34], spatial-temporal shapelet features from 3D binary space-time cubes [35], affine invariants with SVM [36], spatial-temporal micro volumes using binary silhouettes [37], integral volumes of the visual hull and motion history volume [38], and saliency volumes from luminance, color, and orientation components [39]. These methods yield a detailed model but must cope with high-dimensional features, which require accurate segmentation of the human from the background.…”
Section: Introduction
Confidence: 99%