2016
DOI: 10.1007/978-3-319-46484-8_22

VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction

Cited by 276 publications (337 citation statements)
References 44 publications
“…Even real-time tracking of general deforming objects [Zollhöfer et al. 2014] and template-free reconstruction [Dou et al. 2016; Innmann et al. 2016; Newcombe et al. 2015] has been demonstrated. RGB-D information overcomes forward-backward ambiguities in monocular pose estimation.…”
Section: Multi-view
confidence: 99%
“…Some approaches [21,22,29,30,49] have recently been proposed for reconstruction of non-rigid objects and scenes using a single RGB-D sensor. Izadi et al. [22] and Newcombe et al. [30] introduced KinectFusion as a real-time 3D reconstruction approach using a moving Kinect.…”
Section: Related Work
confidence: 99%
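The KinectFusion approach cited above integrates each incoming depth frame into a truncated signed distance function (TSDF) volume by a per-voxel running weighted average. Below is a minimal NumPy sketch of that projective TSDF update, under stated assumptions: the function name `fuse_tsdf`, the grid anchored at the world origin, and the simple pinhole projection are all illustrative choices, not the papers' exact implementation.

```python
import numpy as np

def fuse_tsdf(tsdf, weights, depth, K, cam_pose, trunc=0.05, voxel_size=0.01):
    """One KinectFusion-style weighted-average TSDF update from a depth frame.

    tsdf, weights : (X, Y, Z) signed distances and fusion weights (updated in place)
    depth         : (H, W) depth image in meters (0 = no measurement)
    K             : 3x3 pinhole intrinsics
    cam_pose      : 4x4 world-to-camera transform
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel center (grid anchored at the origin).
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform into the camera frame and project with the pinhole model.
    cam = pts @ cam_pose[:3, :3].T + cam_pose[:3, 3]
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)  # avoid divide-by-zero behind the camera
    u = np.round(K[0, 0] * cam[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[:, 1] / z_safe + K[1, 2]).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Projective signed distance, truncated to [-trunc, trunc]; skip voxels
    # with no depth measurement or too far behind the observed surface.
    sdf = np.clip(d - z, -trunc, trunc)
    update = valid & (d > 0) & (d - z > -trunc)
    ft, fw = tsdf.reshape(-1), weights.reshape(-1)  # flat views onto the grids
    w_new = fw[update] + 1.0
    ft[update] = (ft[update] * fw[update] + sdf[update]) / w_new
    fw[update] = w_new
    return tsdf, weights
```

The running average makes the fused surface estimate converge as frames accumulate; real systems additionally cap the weight so the model can still adapt to change.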
“…This approach was originally designed for rigid scenes, and one of the better-known examples is KinectFusion [15]. This approach was later extended to handle non-rigid objects by describing the deformation of objects with transformations of a signed distance field [9,13,14]. These methods can generate surprisingly high-quality 3D shapes, but may lack tracking stability with regard to, e.g., occlusions.…”
Section: Human Shape Reconstruction
confidence: 99%
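The idea of describing non-rigid motion as a transformation of a signed distance field can be sketched as follows: a deformation field maps points into the canonical frame, where the static SDF is sampled by trilinear interpolation. This is a minimal sketch only — the function names and the simple additive per-voxel displacement are illustrative assumptions, not the warp-field parameterization of any particular paper (which typically use per-node rigid transforms with regularization).

```python
import numpy as np

def trilinear(grid, pts):
    """Trilinearly interpolate a 3D grid at continuous voxel coordinates (N, 3)."""
    p0 = np.floor(pts).astype(int)
    f = pts - p0
    hi = np.array(grid.shape) - 1
    p0 = np.clip(p0, 0, hi - 1)  # clamp so p0 + 1 stays inside the grid
    x, y, z = p0.T
    fx, fy, fz = f.T
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c = c + w * grid[x + dx, y + dy, z + dz]
    return c

def sample_deformed_sdf(sdf, displacement, pts, voxel_size):
    """Evaluate a canonical SDF at points warped by a displacement field.

    sdf          : (X, Y, Z) canonical signed distance grid
    displacement : (X, Y, Z, 3) per-voxel displacement in meters (assumed model)
    pts          : (N, 3) query points in meters
    """
    vox = pts / voxel_size
    # Interpolate each displacement component at the query points.
    disp = np.stack([trilinear(displacement[..., k], vox) for k in range(3)],
                    axis=-1)
    warped = (pts + disp) / voxel_size
    return trilinear(sdf, warped)
```

With a zero displacement field this reduces to ordinary SDF sampling; the non-rigid tracking problem is then to estimate the displacement (or warp) field that best explains each new depth frame.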