RGB-D SLAM in Dynamic Environments Using Static Point Weighting
2017 · DOI: 10.1109/LRA.2017.2724759

Cited by 227 publications (105 citation statements). References 26 publications.
“…[Comparison-table fragment: Motion Segmentation, DSLAM [14]; Motion Removal, DVO-SLAM [16]; DynaSLAM (N+G), RGB-D SLAM [12].] RGB-D ORB-SLAM2 is initialized and starts tracking from the very first frame, and hence dynamic objects can introduce errors. ORB-SLAM delays the initialization until there is parallax and consensus using the staticity assumption.…”
Section: Sequence Depth Edge
Citation type: mentioning, confidence: 99%

“…Furthermore, different failure cases are thoroughly demonstrated, shedding some light on possible future work. For future work, it would be promising to make our current method more robust to dynamic environments, for example by using weighted edge points [47]. Moreover, implementing our current method on a GPU would make it more efficient [41], and it would also be interesting to integrate our method into a robot autonomous navigation system [48].…”
Section: Discussion
Citation type: mentioning, confidence: 99%

“…Then a region-growing procedure is performed to identify dynamic regions by adapting these seeds. Li et al. [22] proposed an RGB-D SLAM method based on depth edges for dynamic environments in 2017. Only depth edge points are used to estimate the camera motion.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
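
The static point weighting idea quoted above lends itself to an iteratively re-weighted alignment: each matched depth edge point is scored by how consistent it is with the current motion estimate, and likely-dynamic points are down-weighted before the pose is re-solved. The sketch below illustrates that scheme; it is not the authors' implementation from [22]. The matched N×3 point arrays, the function names (static_weights, weighted_rigid_transform, estimate_motion), the Cauchy-style weight, and the scale sigma are all assumptions made for the example.

```python
import numpy as np

def static_weights(residuals, sigma=0.05):
    # Cauchy-style robust weight (assumed; the weighting function in [22]
    # may differ): a large frame-to-frame residual suggests the point lies
    # on a moving object, so it receives a low "staticness" weight.
    return 1.0 / (1.0 + (residuals / sigma) ** 2)

def weighted_rigid_transform(src, dst, w):
    # Closed-form weighted Kabsch alignment of matched 3-D point sets:
    # finds R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2.
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def estimate_motion(src, dst, iters=10):
    # Iteratively re-weighted alignment: alternate between scoring each
    # matched edge point for staticness and re-solving the weighted fit,
    # so dynamic points are progressively down-weighted.
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = static_weights(residuals)
        R, t = weighted_rigid_transform(src, dst, w)
    return R, t, w

# Toy usage: static points follow a known rigid motion; a small cluster of
# simulated dynamic points does not, and should end up with low weights.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.02, 0.0, 0.01])
dst[:20] += rng.normal(scale=0.5, size=(20, 3))  # simulated dynamic points
R, t, w = estimate_motion(src, dst)
```

The key design choice is the alternation: under the static-scene motion hypothesis, points on moving objects accumulate large residuals and are progressively down-weighted, so the final pose estimate is dominated by the static points.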