2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00272

FullFusion: A Framework for Semantic Reconstruction of Dynamic Scenes

Abstract: Assuming that scenes are static is common in SLAM research. However, the world is complex, dynamic, and features interactive agents. Mobile robots operating in a variety of environments in real-life scenarios require an advanced level of understanding of their surroundings. Therefore, it is crucial to find effective ways of representing the world in its dynamic complexity, beyond the geometry of static scene elements. We present a framework that enables incremental reconstruction of semantically-annotated 3D m…

Cited by 6 publications (4 citation statements). References 44 publications.
“…2, these methods are distinguished from offline methods in that they usually take unposed RGB-D sequences as input and leverage surfels or voxel grids as the scene representation to enable real-time reconstruction and tracking. One line of work [BLL19, LZNH20, BLL22] exploits semantic instance segmentation models to decompose each observed RGB-D frame into several dynamic parts along with a static background, performing tracking and fusion on each segmented surface independently. In contrast to the aforementioned methods, STAR-no-prior [CB22] reverses the order of segmentation and reconstruction.…”
Section: State-of-the-art Methods
Mentioning confidence: 99%
“…Datasets cover a thorough list of probable situations, such as stereo, RGB-D, and IMU sensors, fast motion, dynamic objects, illumination changes, and sensor degradation, captured on ground robots, drones, and handheld devices, and even on synthesized data. The algorithms that were tested include ORBSLAM2 [29], ORBSLAM3 [28], OpenVINS [40], FullFusion [41], ReFusion [42], and ElasticFusion [43]. Their conclusion is that ORBSLAM3 provides the best balance across the various conditions of illumination, rapid changes, and dynamic objects.…”
Section: Performance Analysis
Mentioning confidence: 99%
“…Static and dynamic geometry - A few recent works combine ideas from the approaches discussed above to capture scenes as completely as possible. The systems most similar to our work are SplitFusion [16] and FullFusion [17]. SplitFusion uses an instance segmentation neural network to split the input into rigid and non-rigid frames, then reconstructs the geometry.…”
Section: Related Work
Mentioning confidence: 99%
“…4) Matching or improving upon state-of-the-art results on pose estimation in dynamic environments.…”

[Flattened comparison table from the citing paper; the boolean feature marks ("x") cannot be reliably assigned to columns, so only method, scene representation, and hardware are kept:]

Method               Representation   Hardware
[6]                  Voxel            GPU
ElasticFusion [7]    Surfel           GPU
InfiniTAM [8]        Voxel            CPU or GPU
SuperEight [9]       Octree           CPU
FlashFusion [10]     Octree           CPU
DynamicFusion [11]   Voxel            GPU
SurfelWarp [12]      Surfel           GPU
SemanticFusion [13]  Surfel           GPU
Kimera [14]          Mesh             CPU
FlowFusion [15]      Voxel            GPU
SplitFusion [16]     Voxel            GPU
FullFusion [17]      (row truncated)

Section: Introduction
Mentioning confidence: 99%