2017
DOI: 10.3390/s17092019

Capturing Complex 3D Human Motions with Kernelized Low-Rank Representation from Monocular RGB Camera

Abstract: Recovering 3D structures from the monocular image sequence is an inherently ambiguous problem that has attracted considerable attention from several research communities. To resolve the ambiguities, a variety of additional priors, such as low-rank shape basis, have been proposed. In this paper, we make two contributions. First, we introduce an assumption that 3D structures lie on the union of nonlinear subspaces. Based on this assumption, we propose a Non-Rigid Structure from Motion (NRSfM) method with kerneli…
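The abstract is truncated here, so the paper's exact objective is not shown; as a hedged illustration only (the symbols Φ, C, E and λ below are assumptions, not the paper's notation), a standard kernelized low-rank representation problem has the form

\[
\min_{C,E}\ \|C\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad \Phi(X) = \Phi(X)\,C + E ,
\]

where X collects the observed 2D point tracks, Φ is a nonlinear feature map accessed only through a kernel matrix K = Φ(X)^{\top}Φ(X), the nuclear norm \|C\|_{*} keeps the self-representation coefficients low-rank, and E absorbs noise. A low-rank C groups the frames into a union of nonlinear subspaces, which is the assumption the abstract introduces.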

Cited by 5 publications (1 citation statement)
References 34 publications
“…Low-rank matrix completion has recently emerged as a powerful tool in computer vision and image processing to recover missing or corrupted data [33–35]. Some researchers [25,36–41] used low-rank matrix completion to recover human motion and achieved better recovery results than state-of-the-art methods. Since our proposed algorithm is based on the low-rank matrix technique, we briefly review its formulation in the next paragraph.…”
Section: Introduction
confidence: 99%
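The cited passage points to a formulation reviewed later in the citing paper, which is not reproduced on this page. For reference, and as an assumption about what is meant rather than a quotation, the standard low-rank matrix completion problem and its usual convex relaxation read

\[
\min_{M}\ \operatorname{rank}(M)
\quad \text{s.t.} \quad M_{ij} = X_{ij},\ (i,j)\in\Omega ,
\qquad\text{relaxed to}\qquad
\min_{M}\ \|M\|_{*}
\quad \text{s.t.} \quad M_{ij} = X_{ij},\ (i,j)\in\Omega ,
\]

where Ω indexes the observed entries of the incomplete data matrix X and \|M\|_{*} is the nuclear norm (sum of singular values), the standard convex surrogate for the rank.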