2017 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2017.7989518
Co-fusion: Real-time segmentation, tracking and fusion of multiple objects

Abstract: In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructi…

Fig. 1: A sequence demonstrating our dynamic SLAM system. Three objects were sequentially placed on a table: first a small bin (blue label), a flask (yellow) and a teddy bear (green). The results show that all objects were successfully segmented, tracked and modeled.

Cited by 203 publications (229 citation statements)
References 26 publications
“…We evaluate the performance of our method qualitatively and quantitatively on datasets containing dynamic scenes published with [15] and the benchmark [22]. Our implementation is done in C++ using OpenCV and CUDA.…”
Section: Methods (mentioning)
confidence: 99%
“…We detect objects through instance segmentation of the images and subsequently perform tracking and mapping of the static background and the objects. In previous approaches [15,16,27], data association of measurements to objects is either solved through image-based instance segmentation or by raycasting in the maps. We propose to determine the unknown association of pixels to objects in a probabilistic expectation maximization (EM [3]) formulation which estimates the soft association likelihood from the likelihood of the measurements in our map representation.…”
Section: Introduction (mentioning)
confidence: 99%
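The soft association idea quoted above can be sketched as a standard EM computation: each measurement receives a responsibility (soft assignment weight) for each object model, proportional to the measurement likelihood under that model. The sketch below is a minimal 1-D Gaussian illustration; all function names and the Gaussian measurement model are assumptions for exposition — the cited work associates pixels to objects via likelihoods in its map representation, not this toy model.

```python
import numpy as np

def e_step(measurements, model_means, model_vars, priors):
    """E-step: soft association likelihoods p(object j | measurement i)."""
    n, k = len(measurements), len(model_means)
    resp = np.zeros((n, k))
    for j in range(k):
        # Gaussian likelihood of each measurement under object model j,
        # weighted by that model's prior probability.
        diff = measurements - model_means[j]
        resp[:, j] = priors[j] * np.exp(-0.5 * diff**2 / model_vars[j]) \
                     / np.sqrt(2 * np.pi * model_vars[j])
    resp /= resp.sum(axis=1, keepdims=True)  # normalize per measurement
    return resp

def m_step(measurements, resp):
    """M-step: re-estimate model parameters from the soft assignments."""
    nk = resp.sum(axis=0)                                  # effective counts
    means = (resp * measurements[:, None]).sum(axis=0) / nk
    variances = (resp * (measurements[:, None] - means)**2).sum(axis=0) / nk
    priors = nk / len(measurements)
    return means, variances, priors
```

Alternating these two steps refines both the associations and the object models; well-separated measurements converge to near-hard assignments, while ambiguous ones keep their probability mass split.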
“…Corresponding model surfels inside the segments are then marked as unstable. In contrast, Rünz et al. [12] explicitly track moving objects given by a segmentation process using either motion or semantic cues provided … [Figure caption: Overview of our approach. Given data from the RGB-D sensor, we first perform an initial pose estimation.]…”
Section: Related Work (mentioning)
confidence: 99%
“…Combined with SLAM systems, 2D semantic segmentation can be achieved in 3D environments [RA17] [TTLN17] [ZSS17] [MHDL17], a promising future in robotic vision understanding and autonomous driving. Unlike these existing methods that aimed at providing the semantic understanding of the scene for robots, we are focusing our attention on human interactions.…”
Section: Previous Work (mentioning)
confidence: 99%