2021
DOI: 10.1111/cgf.142651
RigidFusion: RGB‐D Scene Reconstruction with Rigidly‐moving Objects

Abstract: Figure 1: Given a dynamic scene with rigidly moving objects, RigidFusion performs 4D reconstruction from RGB-D frames (left) and outputs camera motion (green curves), fused object geometries (rendered in light blue and golden yellow), and their respective trajectories (brown/purple curves). Two novel-view reconstructions at two time steps are shown in the middle panel, with frame numbers F_i indicating the corresponding time steps.

Cited by 18 publications (4 citation statements)
References 39 publications
“…For both iMAP and NiceSLAM, we employ the open-source network implementation [ZPL*22] in our training framework instead of their multi-thread SLAM frameworks, which contain several optimizations (e.g., view purging) for real-time applications. Note that iMAP, given background and object segmentation information, can be seen as an upper bound on the performance of a method like RigidFusion [WLNM21].…”
Section: Discussion
confidence: 99%
“…Aggregating raw scans while simultaneously estimating and accounting for the underlying camera motion is an established way of acquiring large-scale geometry of rigid scenes (e.g., KinectFusion [IKH*11], VoxelHash [NZIS13]). This paradigm has been extended to dynamic scenes by simultaneously segmenting and tracking multiple (rigid) objects (e.g., CoFusion [RA17], MaskFusion [RBA18], MidFusion [XLT*19], EmFusion [SS19], RigidFusion [WLNM21]) or by decoupling the handling of objects and human motion (e.g., Mixed-Fusion [ZX17]). These methods explicitly track and represent geometry, with or without texture colors, do not support joint optimization, and need special handling for multiple objects.…”
Section: Related Work
confidence: 99%
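The scan-aggregation paradigm this statement describes (KinectFusion-style depth fusion) boils down, per voxel, to a weighted running average of truncated signed distances across frames. The sketch below is a hypothetical, simplified illustration of that update step only, not code from any of the cited systems; the function name and parameters are assumptions, and real pipelines additionally project each voxel into the depth frame using the estimated camera pose:

```python
import numpy as np

def fuse_tsdf(tsdf, weight, sdf_obs, w_obs=1.0, max_weight=64.0):
    """One KinectFusion-style TSDF fusion step (simplified sketch).

    tsdf, weight: current per-voxel signed-distance and weight arrays.
    sdf_obs:      truncated signed distance measured for each voxel
                  from the newly arrived depth frame.
    The update is a weighted running average, with the accumulated
    weight clamped so old geometry can still adapt to new evidence.
    """
    tsdf_new = (tsdf * weight + sdf_obs * w_obs) / (weight + w_obs)
    weight_new = np.minimum(weight + w_obs, max_weight)
    return tsdf_new, weight_new

# Example: a voxel currently at distance 0.2 with weight 3,
# observed at distance 0.0 in the new frame.
t, w = fuse_tsdf(np.array([0.2]), np.array([3.0]), np.array([0.0]))
# t -> [0.15], w -> [4.0]
```

Dynamic-scene extensions such as the ones cited above run this same update per object, in each object's own coordinate frame, using per-object rigid poses instead of a single camera pose.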
“…Most recent algorithms utilize machine learning on large datasets [4,10] for high-quality reconstruction from RGB-D camera footage [2,3,16,35]. Yet, high-quality reconstruction algorithms are too slow [37] for real-time communication. To overcome this issue, real-time 3D surface reconstruction methods have been proposed [8,15,38].…”
Section: Related Work
confidence: 99%