2015 IEEE International Conference on Computer Vision (ICCV) 2015
DOI: 10.1109/iccv.2015.252

Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models

Abstract: Capturing the 3D motion of dynamic, non-rigid objects has attracted significant attention in computer vision. Existing methods typically require either mostly complete 3D volumetric observations, or a shape template. In this paper, we introduce a template-less 4D reconstruction method that incrementally fuses highly-incomplete 3D observations of a deforming object, and generates a complete, temporally-coherent shape representation of the object. To this end, we design an online algorithm that alternately regi…

Cited by 13 publications (8 citation statements). References 42 publications.
“…Furthermore, Xu et al . [XSWL15] approach trajectory‐based shape interpolation by solving Poisson equations defined on a domain mesh. Interpolated shapes are reconstructed from interpolated gradient fields that exploit both point coordinates and surface orientations instead of directly using point coordinates.…”
Section: Related Work
confidence: 99%
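The gradient-field interpolation described above follows the standard Poisson reconstruction setup; as a sketch of the underlying equation (notation mine, not taken verbatim from [XSWL15]):

```latex
% Given an interpolated gradient field g_t on the domain mesh \Omega,
% the interpolated shape f_t is the least-squares fit
\min_{f} \int_{\Omega} \lVert \nabla f - g_t \rVert^{2} \, dA
% whose Euler--Lagrange (optimality) condition is the Poisson equation
\qquad \Longrightarrow \qquad \Delta f_t = \nabla \cdot g_t ,
% solved per coordinate, with anchor (boundary) constraints
% fixing the global position of the reconstructed shape.
```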
“…Such topological changes of the underlying ground truth model and the fact that the measurement process inherently does not guarantee perfectly corresponding points or point numbers for successive point cloud scans cannot be handled by trajectory‐based deformation and morphing approaches. These require exact point correspondences that can only be reliably obtained for small deformations and motions and, hence, are by design not suitable for analysing growth processes with changing topology [XSWL15]. Further limitations of previous work on deformations based on exact point correspondences regarding their applicability on growth processes include the assumption of piecewise rigid motion as used for object tracking [BIZ18], the requirement of a 3D object template that is deformed and fitted to the point clouds in adjacent time steps using respective priors (without enforcing temporal coherence) [ZFG*17], the involvement of a visual hull prior that biases the optimization in the context of mesh‐based approaches [LLV*12] or the need for large databases required by learning‐based methods [WLX*18] that are hard to acquire due to the time‐consuming nature of the scanning process and the growth of the plants themselves.…”
Section: Introduction
confidence: 99%
“…Their method suffers from flickering effects while still not being able to capture large deformations [38]. In a notable recent work, Xu et al. [64] iteratively build a complete 3D model, and ultimately a 4D reconstruction, by fusing the non-rigidly deforming partial, low-resolution observations and the parameters of a deformation subspace with the help of the Coherent Point Drift (CPD) algorithm [44]. CPD is a probabilistic non-rigid registration algorithm shown to handle arbitrary motions and arbitrary topologies accurately.…”
Section: Related Work
confidence: 99%
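Since the statement above leans on CPD, the following is a minimal sketch of non-rigid CPD in the spirit of Myronenko and Song's formulation (the kernel width `beta`, regularizer `lam`, and outlier weight `w` are illustrative defaults, not values from [64] or [44]):

```python
import numpy as np

def cpd_nonrigid(X, Y, beta=2.0, lam=2.0, iters=30):
    """Sketch of non-rigid Coherent Point Drift.
    X: (N, D) target points; Y: (M, D) source points to deform.
    Returns the deformed source T = Y + G @ W."""
    N, D = X.shape
    M, _ = Y.shape
    # Gaussian kernel over source points enforces motion coherence.
    diff = Y[:, None, :] - Y[None, :, :]
    G = np.exp(-np.sum(diff**2, axis=2) / (2 * beta**2))
    W = np.zeros((M, D))
    sigma2 = np.sum((X[None] - Y[:, None])**2) / (D * M * N)
    w = 0.1  # uniform-outlier weight
    for _ in range(iters):
        T = Y + G @ W
        # E-step: soft correspondences P(m|n) under a GMM centred at T.
        d2 = np.sum((X[None, :, :] - T[:, None, :])**2, axis=2)  # (M, N)
        num = np.exp(-d2 / (2 * sigma2))
        c = (2 * np.pi * sigma2)**(D / 2) * w / (1 - w) * M / N
        P = num / (num.sum(axis=0, keepdims=True) + c)
        # M-step: solve for the coherent displacement coefficients W.
        Pt1 = P.sum(axis=0)   # (N,) posterior mass per target point
        P1 = P.sum(axis=1)    # (M,) posterior mass per source point
        Np = P1.sum()
        A = np.diag(P1) @ G + lam * sigma2 * np.eye(M)
        B = P @ X - np.diag(P1) @ Y
        W = np.linalg.solve(A, B)
        T = Y + G @ W
        # Update the GMM variance from the weighted residuals.
        sigma2 = (np.sum(Pt1 * np.sum(X**2, axis=1))
                  - 2 * np.sum((P @ X) * T)
                  + np.sum(P1 * np.sum(T**2, axis=1))) / (Np * D)
        sigma2 = max(sigma2, 1e-8)
    return Y + G @ W
```

The E-step assigns soft, probabilistic correspondences rather than hard nearest neighbours, which is what lets CPD tolerate the missing data and topology differences the quoted statement refers to.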
“…The method of Xu et al. also has a tendency to suffer from drift under large deformations. Similar to Xu et al. [64], a recent body of work in this domain uses a recursive approach for temporal fusion and incremental construction of high-quality 3D reference models without the need to build complete 4D reconstructions. In this vein, Dou and Fuchs have proposed a recursive template-free scheme, using a multi-view system composed of ten Kinect v1 cameras, which tracks the motion of dynamic human subjects using deformation graphs [21].…”
Section: Related Work
confidence: 99%
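The deformation graphs mentioned above blend a sparse set of per-node rigid transforms over the surface (the embedded-deformation idea). A minimal sketch of that warping step, with inverse-distance weights chosen purely for illustration (function and parameter names are mine, not from [21]):

```python
import numpy as np

def warp_embedded(verts, nodes, R, t, k=3):
    """Sketch of embedded-deformation warping: each vertex is a
    weighted blend of rigid transforms (R[j], t[j]) attached to its
    k nearest graph nodes at positions nodes[j]."""
    # Distances from every vertex to every graph node.
    d = np.linalg.norm(verts[:, None, :] - nodes[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]            # k nearest nodes per vertex
    dk = np.take_along_axis(d, idx, axis=1)
    w = 1.0 / (dk + 1e-8)                          # illustrative weighting
    w /= w.sum(axis=1, keepdims=True)              # normalise to sum to 1
    out = np.zeros_like(verts)
    for j in range(k):
        g = nodes[idx[:, j]]                       # node positions, (n, 3)
        Rj = R[idx[:, j]]                          # node rotations, (n, 3, 3)
        tj = t[idx[:, j]]                          # node translations, (n, 3)
        # Rotate about each node, then translate; blend by weight.
        out += w[:, j:j + 1] * (np.einsum('nij,nj->ni', Rj, verts - g) + g + tj)
    return out
```

With identity rotations and zero translations the warp reduces to the identity, which is a useful sanity check when wiring such a graph into a tracking loop.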
“…Instead, we use projection-mapping and pruning algorithms to render the mixed reality scene in real time. Xu et al. [2015] have achieved more robust dynamic 3D reconstruction from a single Kinect sensor by using a warp-field or subspaces for the surface deformation. Both techniques warp a reference volume non-rigidly to each new input frame.…”
Section: Fusing Multiple Dynamic Videos
confidence: 99%