2022
DOI: 10.1109/access.2022.3151133
G2P-SLAM: Generalized RGB-D SLAM Framework for Mobile Robots in Low-Dynamic Environments

Abstract: In this paper, we propose a generalized grouping and pruning method for RGB-D SLAM in low-dynamic environments. The conventional grouping and pruning methods successfully reject the effect of dynamic objects in pose graph optimization (PGO). However, these methods sometimes fail when high-dynamic objects are dominant in the images captured by RGB-D sensors. Furthermore, once it is determined whether the features from dynamic objects are included in some nodes, the corresponding nodes are entirely removed even …

Cited by 11 publications (4 citation statements) · References 28 publications
“…One of the solutions to this problem is to employ a wheel encoder. G2P-SLAM [19] rejected loop closure matching results with a high Mahalanobis distance from the pose estimated by wheel odometry, which is invariant to the effect of dynamic and temporarily static objects. Despite the advantages of the wheel encoder, these methods are highly dependent on it, limiting their applicability.…”
Section: B. Dynamic Objects Rejection in Visual and VI SLAM
Confidence: 99%
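The rejection strategy quoted above can be illustrated with a short sketch: a loop-closure candidate is discarded when its pose lies too far, in the Mahalanobis sense, from the pose predicted by wheel odometry. This is a minimal illustration, not the paper's implementation; the function name, the 2-D pose parameterization (x, y, yaw), and the chi-square threshold are all assumptions made for the example.

```python
import numpy as np

def mahalanobis_gate(candidate_pose, odom_pose, odom_cov, threshold=7.815):
    """Accept a loop-closure candidate only if its Mahalanobis distance
    from the wheel-odometry pose estimate is below a chi-square bound.

    Illustrative sketch: poses are (x, y, yaw); 7.815 is the 95%
    chi-square quantile for 3 degrees of freedom.
    """
    diff = np.asarray(candidate_pose, dtype=float) - np.asarray(odom_pose, dtype=float)
    # Wrap the yaw difference into [-pi, pi] before computing the distance.
    diff[2] = (diff[2] + np.pi) % (2.0 * np.pi) - np.pi
    d_squared = diff @ np.linalg.inv(odom_cov) @ diff
    return bool(d_squared <= threshold)  # True: keep the match in the pose graph
```

With an identity covariance, a candidate coinciding with the odometry estimate passes the gate, while one displaced by 10 m fails it; in practice `odom_cov` would come from the propagated wheel-odometry uncertainty.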
“…A construction site continuously changes over time and on a large scale, and AI must be cognizant of up-to-date spatial information. Recently, simultaneous localization and mapping (SLAM) based on cameras [5][6][7] or light detection and ranging (LiDAR) systems [8,9] has been introduced to help robots localize themselves and recognize spaces even in unseen or unknown environments. When these localization and mapping…”
Section: Preliminaries
Confidence: 99%
“…Of course, SLAM can be performed using various sensors, e.g., RGB or RGB-D cameras [8], radar sensors, or ultrasonic sensors. Some researchers have employed a 3D light detection and ranging (LiDAR) sensor to achieve a precise 3D point cloud map [9], [10].…”
Section: Introduction
Confidence: 99%