2010
DOI: 10.1016/j.patrec.2009.11.017

View-independent human action recognition with Volume Motion Template on single stereo camera

Cited by 79 publications (37 citation statements)
References 17 publications
“…Using stereo, the existing methods typically try to make the algorithm insensitive to the camera viewpoint [11]. Similarly, [12] uses a special room and a multi-camera setup to construct a viewpoint-invariant action representation, and [13] incorporates temporal information into the multi-view setup.…”
Section: Introduction (mentioning)
confidence: 99%
“…Figure 9 shows disparity maps for an action sequence, with 100 levels obtained between the tip of the hand (last image) and the background. The quality of the results demonstrates that our setup can be used in applications such as autonomous navigation in robots [36], scene understanding [5] and action recognition [33]. For robotic applications, our system can be more effective than using multiple cameras due to the limited space and resources on a mobile platform.…”
Section: More Results (mentioning)
confidence: 97%
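The excerpt above refers to dense disparity maps computed over an action sequence. As a rough, hedged illustration of how such a map is typically obtained from a rectified stereo pair (not the cited authors' implementation), the sketch below uses OpenCV's semi-global block matcher; the file names and matcher parameters, including rounding the quoted ~100 disparity levels up to 112 (a required multiple of 16), are assumptions made for the example.

```python
# Minimal sketch: dense disparity from a rectified stereo pair using OpenCV SGBM.
# File names and parameters are illustrative assumptions, not values from the paper.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=112,   # must be a multiple of 16; ~100 levels as in the excerpt
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
)

# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
print("disparity range:", disparity.min(), disparity.max())
```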
“…Here we describe the use of the system for 3D reconstruction, which has wide-ranging applications in 3D immersive technology [32], face and expression recognition [1,2], action recognition [33], and archeology [34]. Many of the above applications demand portability without sacrificing the quality of the reconstruction.…”
Section: 3D Reconstruction (mentioning)
confidence: 99%
“…This work was extended in [33], where MHI and two appearance-based features, namely the foreground image and the histogram of oriented gradients (HOG), were combined for action representation, followed by a simulated annealing multiple-instance learning support vector machine (SMILE-SVM) for action classification. The method proposed in [34] also extended [32] from 2D to 3D space for view-independent human action recognition using a volume motion template. The experimental results using these techniques are presented in Table 1.…”
Section: Space-time-based Approaches (mentioning)
confidence: 99%
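The excerpt above builds on the classic 2D motion history image (MHI), which the cited work extends to a 3D volume motion template. Purely as a hedged sketch of the 2D MHI idea (not the volume motion template, the SMILE-SVM pipeline, or the cited authors' code), the snippet below accumulates thresholded frame differences with a linear decay; the motion threshold and the duration tau are illustrative assumptions.

```python
# Minimal sketch of a 2D Motion History Image: recent motion is bright,
# older motion fades. Threshold and tau are assumed values for illustration.
import numpy as np

def motion_history_image(frames, tau=30, diff_thresh=25):
    """frames: iterable of equal-sized uint8 grayscale images."""
    frames = iter(frames)
    prev = next(frames).astype(np.int16)
    mhi = np.zeros(prev.shape, dtype=np.float32)
    for frame in frames:
        cur = frame.astype(np.int16)
        moving = np.abs(cur - prev) > diff_thresh   # crude motion silhouette
        # moving pixels reset to tau; all others decay by one step toward zero
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
        prev = cur
    return mhi / tau  # normalised to [0, 1] for use as an action template
```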