2019
DOI: 10.3390/app9102105
A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion

Abstract: The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, errors accumulate rapidly during SLAM and can become large when building a larger map. To cope with this problem, this paper proposes a new graph optimization-based SLAM framework combining low-cost LiDAR…

Cited by 57 publications (44 citation statements)
References 29 publications (41 reference statements)
“…Perhaps the tightest fusion currently available was proposed in [78], where graph optimization was performed using a cost function that considers both laser and feature constraints. Here, both the laser data and the image data contribute to the robot pose estimation.…”
Section: Concurrent LiDAR-Visual SLAM
confidence: 99%
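The statement above describes a joint cost that mixes laser-scan constraints and visual-feature constraints in a single graph-optimization objective. A minimal sketch of that idea follows; the residual models, landmark observation model, and weights here are illustrative assumptions, not the cited paper's actual formulation.

```python
import numpy as np

def laser_residual(pose, scan_constraint):
    # Difference between the estimated pose (x, y, theta) and the
    # relative-pose constraint derived from scan matching.
    return pose - scan_constraint

def feature_residual(pose, feature_obs, landmark):
    # Difference between the landmark as seen from the pose and the
    # observed feature position (a toy 2D observation model).
    x, y, _ = pose
    return np.array([landmark[0] - x, landmark[1] - y]) - feature_obs

def combined_cost(pose, scan_constraint, feature_obs, landmark,
                  w_laser=1.0, w_feat=0.5):
    # Weighted sum of squared laser and feature residuals, the kind of
    # mixed objective a graph optimizer would minimize over all poses.
    r_l = laser_residual(pose, scan_constraint)
    r_f = feature_residual(pose, feature_obs, landmark)
    return w_laser * (r_l @ r_l) + w_feat * (r_f @ r_f)

pose = np.array([1.0, 2.0, 0.1])
scan = np.array([1.1, 1.9, 0.1])
obs = np.array([0.5, 0.4])
lm = np.array([1.6, 2.5])
print(round(combined_cost(pose, scan, obs, lm), 4))  # → 0.03
```

In a full pipeline this cost would be summed over every edge in the pose graph and minimized with a nonlinear solver; the weights trade off trust in the laser versus the camera.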
“…Stereo visual odometry is used to track points and lines, and Gauss-Newton optimization is then employed to estimate the camera motion by minimizing the projection errors of the corresponding features. In [59], observations of point features are combined with laser scans and used in a factor graph to estimate the robot's pose. A new map representation combining an occupancy grid map and point features was proposed.…”
Section: Multiple Feature Types to Aid Robustness
confidence: 99%
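A minimal Gauss-Newton sketch in the spirit of the statement above: estimate a 2D camera translation by minimizing projection errors of known points. The observation model is an assumed linear toy (residual r_i = (p_i - t) - z_i with Jacobian -I per point), so the normal-equation update reduces to the mean residual and a single iteration converges; it is not the cited stereo system.

```python
import numpy as np

def gauss_newton_translation(points, observations, t0, iters=5):
    t = np.asarray(t0, dtype=float)
    for _ in range(iters):
        # Residuals: predicted projection (p - t) minus observation z.
        r = (points - t) - observations
        # With per-point Jacobian J_i = -I, the normal equations
        # (J^T J) dt = -J^T r simplify to dt = mean(r).
        dt = r.mean(axis=0)
        t = t + dt
        if np.linalg.norm(dt) < 1e-10:
            break
    return t

pts = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
true_t = np.array([0.3, -0.2])
obs = pts - true_t                      # noiseless observations
est = gauss_newton_translation(pts, obs, t0=[0.0, 0.0])
print(est)                              # close to [0.3, -0.2]
```

Real visual odometry uses a nonlinear perspective projection, so the Jacobian varies per iteration and several Gauss-Newton steps are needed; the update structure is the same.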
“…The information necessary for autonomous navigation comes from the following systems: the navigation system, responsible for performing movements; the control system, for performing actions and corrections related to navigation; and the sensory system, whose purpose is to analyze the internal states of the system and the environment, according to orientation and speed. These systems recognize structures and create maps, which are obtained by comparing sensor data with the estimate of the current position [32,33]. The technique of building a 2D map of the environment while simultaneously estimating the position of a mobile robot is called Simultaneous Localization and Mapping (SLAM).…”
Section: SLAM Technique
confidence: 99%
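The localize-and-map cycle described above can be illustrated with a toy step: update the pose estimate from odometry, then mark the cells observed from the new pose in a 2D occupancy grid. Full SLAM also corrects the pose against the map (e.g., by scan matching); this assumed minimal sketch shows only the simultaneous cycle.

```python
import numpy as np

def slam_step(pose, odom, grid, hit_cells):
    # Localization: dead-reckoned pose update from odometry.
    pose = pose + odom
    # Mapping: mark cells observed from the new pose as occupied.
    for (dx, dy) in hit_cells:
        gx, gy = int(pose[0]) + dx, int(pose[1]) + dy
        if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
            grid[gx, gy] = 1
    return pose, grid

grid = np.zeros((10, 10), dtype=int)
pose = np.array([2.0, 2.0])
pose, grid = slam_step(pose, np.array([1.0, 0.0]), grid, [(1, 0), (0, 1)])
print(pose, grid.sum())  # → [3. 2.] 2
```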