2014
DOI: 10.1007/978-3-319-10605-2_48
HOPC: Histogram of Oriented Principal Components of 3D Pointclouds for Action Recognition

Abstract: Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Princ…
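The descriptor the abstract describes can be illustrated with a minimal sketch: at a query point, PCA is applied to the local neighbourhood of the point cloud, and each principal axis is projected onto a set of reference directions to form an eigenvalue-weighted histogram. This is an assumption-laden simplification, not the authors' implementation — the function name, the `radius` parameter, and the choice of reference directions are all hypothetical here (the paper quantizes against the vertices of a regular polyhedron and handles eigenvector sign disambiguation, which this sketch omits).

```python
import numpy as np

def hopc_descriptor(points, p, radius, directions):
    """Hypothetical sketch of a HOPC-style descriptor at query point p.

    points:     (N, 3) point cloud
    p:          (3,) query point
    radius:     neighbourhood radius (free parameter in this sketch)
    directions: (m, 3) unit vectors to histogram against
    """
    # 1. Collect the spatial neighbourhood of p.
    nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]

    # 2. PCA of the neighbourhood: eigen-decompose the scatter matrix.
    centred = nbrs - nbrs.mean(axis=0)
    cov = centred.T @ centred / max(len(nbrs) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
    order = np.argsort(eigvals)[::-1]             # largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # 3. Project each principal axis onto the m directions, keep the
    #    positive responses, normalize, and weight by its eigenvalue.
    parts = []
    for lam, v in zip(eigvals, eigvecs.T):
        proj = np.clip(directions @ v, 0.0, None)
        total = proj.sum()
        if total > 0:
            proj = proj / total
        parts.append(lam * proj)

    # 4. Concatenate the three histograms into one descriptor (3 * m).
    return np.concatenate(parts)
```

As a usage illustration, six axis-aligned directions on an anisotropic synthetic cloud yield an 18-dimensional, non-negative descriptor whose largest entries align with the cloud's dominant axis:

```python
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])
dirs = np.vstack([np.eye(3), -np.eye(3)])        # +-x, +-y, +-z
desc = hopc_descriptor(pts, np.zeros(3), 10.0, dirs)
```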

Cited by 157 publications (140 citation statements)
References 35 publications
“…In the literature, one can find a number of activity recognition approaches based on image sequences, point clouds or depth maps, where occupancy patterns are calculated [47] or different features are extracted such as spatio-temporal context distribution of interest points [48], histogram of oriented principal components [49] or oriented 4D normals [50], and 3D flow estimation [51]. However, the sparsity of Lidar point clouds (versus Kinect) becomes a bottleneck for extracting the above features.…”
Section: Action Recognition
confidence: 99%
“…Holistic descriptors, namely histogram of oriented 4D normals (HON4D) and histogram of oriented principal components (HOPC) have been exploited respectively in Refs. [42,43]. HON4D is based on the orientation of normal surfaces in 4D while HOPC can represent the geometric characteristics of a sequence of 3D points.…”
Section: Related Work on RGB-D Sensors
confidence: 99%
“…
MSRAction3D                            Accuracy
Studies employing depth data
  Action Graph [11]                    74.70
  HON4D [16]                           85.85
  Vieira et al [24]                    78.20
  Random Occupancy Patterns [25]       86.50
  HOPC [17]                            91.64
  JAS(Cosine)+MaxMin+HOG2 [15]         94.84
  DMM-LBP-FF [3]                       87.90
Studies employing only skeleton data
  Actionlet Ensemble [27]              88.20
  Histogram of 3D Joint [28]           78.97
  GB-RBM & HMM [14]                    80.20
  Points in a Lie Group [23]           89.48
  Ensemble classification [2]          84.85
  Proposed method                      90.57

CAD-60                                 Accuracy
Studies employing depth data
  MTO-Sparse coding [13]               65.30
Studies employing only skeleton data
  Actionlet Ensemble [27]              74.70
  Sung et al (2012) [21]               51.30
  Proposed method                      76.67

rival methods on these datasets based on the cross-subject test setting. As can be seen, most studies use depth data in addition to skeleton data, and a few of them have better performance than ours, such as [15] and [17].…”
Section: MSRAction3D
confidence: 99%