2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.333
3D Pose from Motion for Cross-View Action Recognition via Non-linear Circulant Temporal Encoding

Cited by 103 publications (103 citation statements)
References 26 publications
“…Lee and Chen [30] first investigated inferring 3D joints from their corresponding 2D projections. Later approaches either exploited nearest neighbors to refine the results of pose inference [18,25] or extracted hand-crafted features [1,23,47] for later regression. Other methods created over-complete bases which are suitable for representing human poses as sparse combinations [2,4,44,62,77].…”
Section: Related Work (citation type: mentioning)
confidence: 99%
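The "sparse combinations over an over-complete basis" idea quoted above can be made concrete with a toy sketch. The function below is a minimal orthogonal matching pursuit, a standard greedy sparse-coding routine; it is an illustrative stand-in, not the algorithm of any specific cited paper, and the basis `D` and pose vector `x` are made-up toy data.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy sparse coding: approximate x as a combination of at most
    n_nonzero columns (atoms) of the over-complete basis D."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit a least-squares solution on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

# Toy over-complete basis: 4 atoms in R^3; x is exactly 2-sparse in D.
D = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = 2.0 * D[:, 0] + 3.0 * D[:, 2]
c = omp(D, x, n_nonzero=2)
```

With an over-complete dictionary of basis poses, a plausible 3D pose can thus be summarized by a handful of coefficients, which is what makes the representation attractive for regularizing 2D-to-3D lifting.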
“…They are complementary to the others. Remaining works target at exploiting temporal information [11,18,21,57] for 3D pose regression. They are out of the scope of this paper, since we aim at handling the 2D pose from one single image.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…4. Specifically, given K human joints with

Reference | Representation | Encoding | Structure | Construction
[175] | Vector of Joints | Conc | Lowlv | Hand
Patsadu et al. [176] | Vector of Joints | Conc | Lowlv | Hand
Huang and Kitani [177] | Cost Topology | Stat | Lowlv | Hand
Devanne et al. [178] | Motion Units | Conc | Manif | Hand
Wang et al. [179] | Motion Poselets | BoW | Body | Dict
Wei et al. [180] | Structural Prediction | Conc | Lowlv | Hand
Gupta et al. [181] | 3D Pose w/o Body Parts | Conc | Lowlv | Hand
Amor et al. [182] | Skeleton's Shape | Conc | Manif | Hand
Sheikh et al. [183] | Action Space | Conc | Lowlv | Hand
Yilma and Shah [184] | Multiview Geometry | Conc | Lowlv | Hand
Gong et al. [185] | Structured Time | Conc | Manif | Hand
Rahmani and Mian [186] | Knowledge Transfer | BoW | Lowlv | Dict
Munsell et al. [187] | Motion Biometrics | Stat | Lowlv | Hand
Lillo et al. [188] | Composable Activities | BoW | Lowlv | Dict
Wu et al. [189] | Watch-n-Patch | BoW | Lowlv | Dict
Gong and Medioni [190] | Dynamic Manifolds | BoW | Manif | Dict
Han et al. [191] | Hierarchical Manifolds | BoW | Manif | Dict
Slama et al. [192,193] | Grassmann Manifolds | BoW | Manif | Dict
Devanne et al. [194] | Riemannian Manifolds | Conc | Manif | Hand
Huang et al. [195] | Shape Tracking | Conc | Lowlv | Hand
Devanne et al. [196] | Riemannian Manifolds | Conc | Manif | Hand
Zhu et al. [197] | RNN with LSTM | Conc | Lowlv | Deep
Chen et al. [198] | EnwMi Learning | BoW | Lowlv | Dict
Hussein et al. [199] | Covariance of 3D Joints | Stat | Lowlv | Hand
Shahroudy et al. [200] | MMMP | BoW | Body | Unsup
Jung and Hong [201] | Elementary Moving Pose | BoW | Lowlv | Dict
Evangelidis et al. [202] | Skeletal Quad | Conc | Lowlv | Hand
Azary and Savakis [203] | Grassmann Manifolds | Conc | Manif | Hand
Barnachon et al. [204] | Hist. of Action Poses | Stat | Lowlv | Hand
Shahroudy et al. [205] | Feature Fusion | BoW | Body | Unsup
Cavazza et al. [206] | Kernelized-COV | Stat | Lowlv | Hand …”
Section: Representations Based On Raw Joint Positions (citation type: mentioning)
confidence: 99%
“…5. Gupta et al [181] proposed a cross-view human representation, which matches trajectory features of videos to MoCap joint trajectories and uses these matches to generate multiple motion projections as features. Junejo et al [207] used trajectorybased self-similarity matrices (SSMs) to encode humans observed from different views.…”
Section: Representations Based On Raw Joint Positions (citation type: mentioning)
confidence: 99%
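The self-similarity matrix (SSM) mentioned in the statement above has a very compact definition: entry (i, j) is the distance between a tracked point's positions at frames i and j, so the matrix depends only on intra-trajectory distances rather than on raw image coordinates, which is what gives it stability across viewpoints. A minimal sketch (the function name and toy trajectory are illustrative, not from the cited papers):

```python
import numpy as np

def self_similarity_matrix(trajectory):
    """SSM of a trajectory: entry (i, j) is the Euclidean distance
    between the point's positions at frames i and j."""
    traj = np.asarray(trajectory, dtype=float)   # shape (T, d)
    diffs = traj[:, None, :] - traj[None, :, :]  # pairwise differences, (T, T, d)
    return np.linalg.norm(diffs, axis=-1)        # pairwise distances, (T, T)

# Toy 2D trajectory over three frames; the SSM is symmetric
# with a zero diagonal by construction.
ssm = self_similarity_matrix([[0, 0], [3, 4], [6, 8]])
```

In the cross-view setting, SSMs computed from the same action seen from two cameras remain similar even when the raw 2D trajectories differ substantially.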
“…Generating training data We adapt the mocap trajectory generation pipeline of Gupta et al [1], which uses a human model with cylindrical primitives (see Figure 1(b)). Each limb consists of a collection of points that are placed on a 3D surface.…”
Section: Methods (citation type: mentioning)
confidence: 99%
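The cylindrical-primitive limb model in the statement above can be illustrated with a short sketch: each limb is a cylinder, and the "collection of points placed on a 3D surface" amounts to parameterizing points by an angle around the cylinder axis and a height along it. This is a hedged toy version of that idea (function name, axis convention, and random placement are assumptions, not the authors' exact pipeline):

```python
import numpy as np

def sample_cylinder_surface(radius, length, n_points, rng=None):
    """Place points on the lateral surface of a cylindrical limb
    primitive, with the cylinder axis taken as +z."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_points)  # angle around the axis
    z = rng.uniform(0.0, length, n_points)           # position along the axis
    # x and y lie on the circular cross-section of fixed radius.
    return np.column_stack((radius * np.cos(theta),
                            radius * np.sin(theta),
                            z))

# Example: 200 surface points for a limb of radius 5 cm and length 40 cm.
pts = sample_cylinder_surface(radius=0.05, length=0.4, n_points=200, rng=0)
```

Projecting such surface points through a camera model over a mocap sequence then yields the synthetic point trajectories used as training data.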