2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.137

Deep Learning on Lie Groups for Skeleton-Based Action Recognition

Abstract: In recent years, skeleton-based action recognition has become a popular 3D classification problem. State-of-the-art methods typically first represent each motion sequence as a high-dimensional trajectory on a Lie group with an additional dynamic time warping, and then shallowly learn favorable Lie group features. In this paper we incorporate the Lie group structure into a deep network architecture to learn more appropriate Lie group features for 3D action recognition. Within the network structure, we design rot…
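For context, the sketch below (not the authors' implementation; bone vectors, function names, and the SO(3)-only construction are illustrative assumptions) shows one common way a pair of skeleton bones is mapped to a point on the rotation group SO(3), so that a motion sequence becomes the kind of Lie group trajectory the abstract refers to.

# Minimal sketch, assuming a relative-rotation representation of bone pairs.
import numpy as np

def rotation_between(u, v):
    """Rotation matrix R in SO(3) with R @ u_hat = v_hat (Rodrigues' formula)."""
    u_hat = u / np.linalg.norm(u)
    v_hat = v / np.linalg.norm(v)
    axis = np.cross(u_hat, v_hat)
    s, c = np.linalg.norm(axis), float(np.dot(u_hat, v_hat))
    if s < 1e-8:                        # (anti)parallel bones: degenerate case,
        return np.eye(3)                # handled more carefully in practice
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# One frame yields one rotation per ordered bone pair; a sequence of frames is
# then a curve on the product group SO(3) x ... x SO(3).
upper_arm = np.array([0.0, 1.0, 0.0])     # made-up example bone direction
forearm = np.array([0.3, 0.8, 0.1])       # made-up example bone direction
R = rotation_between(upper_arm, forearm)  # a point on SO(3)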

Cited by 240 publications (183 citation statements)
References 45 publications
“…Besides RNNs and CNNs, some other deep models have also been introduced for 3D human action recognition. Huang et al. [62] incorporated Lie group structure into a deep architecture for skeleton-based action recognition. Tang et al. [77] applied deep progressive reinforcement learning to distil the informative frames in the video sequences.…”
Section: 3D Action Recognition by Hand-crafted Features
confidence: 99%
“…The batch size and the learning rate are set to 30 and 0.01, respectively. The rectification threshold used for the ReEig layer is set to 0.0001 [12]. For gesture classification, we use the LIBLINEAR library [9] with L2-regularized L2-loss (dual) to train the classifier and use the default parameter settings in LIBLINEAR (C is set to 1, the tolerance of the termination criterion is set to 0.1, and no bias term is added).…”
Section: Methods
confidence: 99%
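For reference, a minimal sketch of the ReEig-style eigenvalue rectification this excerpt mentions, assuming the standard definition (floor the eigenvalues of a symmetric positive-definite feature matrix at a threshold, here the quoted 0.0001); the function and variable names are illustrative, not taken from the cited code.

import numpy as np

def reeig(X, eps=1e-4):
    """Rectify an SPD matrix by flooring its eigenvalues at eps:
    ReEig(X) = U @ diag(max(sigma, eps)) @ U.T"""
    sigma, U = np.linalg.eigh(X)                      # eigendecomposition of symmetric X
    return U @ np.diag(np.maximum(sigma, eps)) @ U.T

# Illustrative input: a covariance-like matrix with near-zero eigenvalues.
A = np.random.randn(5, 3)
X = A @ A.T + 1e-9 * np.eye(5)
Y = reeig(X)    # eigenvalues of Y are now all >= 1e-4

The LIBLINEAR settings quoted above (L2-regularized L2-loss in the dual, C = 1, termination tolerance 0.1, no bias term) correspond to LIBLINEAR's default dual solver for the squared hinge loss.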
“…Manifold learning approaches operated on different manifolds, e.g. SPD manifolds [11], Lie groups [12], and Grassmann manifolds [13]. Graph learning approaches [25], [34] were based on graph convolution filters and Deep Residual Networks [9], which have shown very promising results on large-scale datasets.…”
Section: A. Skeleton-based Hand Gesture and Action Recognition
confidence: 99%
“…Recently, deep learning on manifolds and graphs has increasingly attracted attention. Approaches following this line of research have also been successfully applied to skeleton-based action recognition [19,20,23,27,56]. By extending classical operations like convolutions to manifolds and graphs while respecting the underlying geometric structure of data, they have demonstrated superior performance over other approaches.…”
Section: Skeleton-based Gesture Recognition
confidence: 99%