2022
DOI: 10.3390/rs14236033

SLAM Overview: From Single Sensor to Heterogeneous Fusion

Abstract: After decades of development, LiDAR and visual SLAM technology has matured considerably and is widely used in military and civil fields. SLAM technology gives a mobile robot the ability to localize itself and build a map autonomously, which allows the robot to operate in indoor and outdoor scenes where GPS signals are scarce. However, SLAM relying on only a single sensor has its limitations. For example, LiDAR SLAM is not suitable for scenes with highly dynamic or sparse features, and visual SL…

Cited by 36 publications (17 citation statements) · References 209 publications
“…Three-dimensional object detection refers to the task of detecting and recognizing objects in a 3D scene and estimating their relevant parameters. This information is crucial for applications such as autonomous driving and intelligent robots [21,22]. In Section 2.1, we will discuss the 3D object detection methods using camera, LiDAR, and radar separately.…”
Section: Related Work (mentioning)
confidence: 99%
“…SLAM is a technique used by mobile robots to determine their location in an unknown environment while simultaneously constructing an accurate map of the environment. The localization component estimates the robot’s position on an existing map, while the mapping component constructs the environment’s map [10,11,12]. The two components are interdependent in completing the map building process and enhancing accuracy through continuous iteration.…”
Section: Introduction (mentioning)
confidence: 99%
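
The interdependence this passage describes can be illustrated with a toy example. Below is a minimal, hypothetical 1D sketch (not from the cited paper, with arbitrarily chosen gain values): the pose estimate is corrected against the current map, here a single landmark position, and the map is then refined from the corrected pose, so both estimates improve through repeated iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_landmark = 10.0
pose_est, landmark_est = 0.0, 8.0   # deliberately poor initial "map"

for step in range(20):
    # Motion update: the robot moves 1 m per step with odometry drift.
    pose_est += 1.0 + rng.normal(0.0, 0.05)
    true_pose = step + 1.0
    # Observation: noisy range to the landmark from the true pose.
    z = (true_landmark - true_pose) + rng.normal(0.0, 0.1)
    # Localization: correct the pose against the existing map.
    pose_est = 0.7 * pose_est + 0.3 * (landmark_est - z)
    # Mapping: refine the landmark estimate from the corrected pose.
    landmark_est = 0.9 * landmark_est + 0.1 * (pose_est + z)

print(f"landmark estimate: {landmark_est:.2f} (true value: {true_landmark})")
```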
“…The optimization algorithm is iterated to minimize the error of the edges to find the optimal solution [13].…”
Section: Introduction (mentioning)
confidence: 99%
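
The edge-error minimization this quote refers to is the core of graph-based SLAM back ends. The sketch below is a hedged illustration, not the cited paper's method: scipy's generic least_squares stands in for dedicated pose-graph solvers such as g2o or Ceres, and all measurement values are invented. A loop-closure edge slightly contradicts the odometry edges, and the optimizer iteratively redistributes that error across the poses.

```python
import numpy as np
from scipy.optimize import least_squares

# Edges: (from_node, to_node, measured 2D relative displacement).
# The last edge is a loop closure that contradicts the odometry chain.
edges = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.0, 0.1])),
    (2, 3, np.array([0.9, 0.0])),
    (3, 0, np.array([-2.95, -0.05])),
]

def residuals(flat_poses):
    poses = flat_poses.reshape(-1, 2)
    # One residual per edge: predicted relative motion minus measurement.
    res = [poses[j] - poses[i] - z for i, j, z in edges]
    res.append(poses[0])  # pin pose 0 at the origin (removes gauge freedom)
    return np.concatenate(res)

init = np.zeros(8)                    # four 2D poses, all at the origin
sol = least_squares(residuals, init)  # iteratively minimizes squared edge error
print(sol.x.reshape(-1, 2))           # optimized poses
```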
“…The fusion of radar and infrared cameras can greatly enhance the completeness of vehicle perception information, including position, motion status and size (Zhangu et al., 2021; Dalirani et al., 2023). For indoor environments within buildings, early fusion of laser LiDAR with visual sensors primarily involved visible light color cameras and depth cameras (Chen et al., 2022; Wang et al., 2022). Visible light color cameras and depth cameras have high resolutions, providing detailed information about the color and texture of the surrounding environment.…”
Section: Introduction (mentioning)
confidence: 99%
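
As a concrete illustration of the early LiDAR-camera fusion mentioned in this passage, the sketch below projects 3D LiDAR points into a color image through a pinhole model; the intrinsic matrix and LiDAR-to-camera extrinsics are assumed placeholder values, not calibration from any cited work. Each projected pixel location is where the corresponding 3D point would sample the image's color and texture.

```python
import numpy as np

# Assumed pinhole intrinsics (focal lengths and principal point).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed LiDAR-to-camera extrinsics: identity rotation, small offset.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.1])

# Toy LiDAR points in meters (x right, y down, z forward).
points_lidar = np.array([[ 2.0, 0.5, 5.0],
                         [-1.0, 0.2, 8.0]])

cam = points_lidar @ R.T + t      # transform into the camera frame
proj = cam @ K.T                  # apply intrinsics
uv = proj[:, :2] / proj[:, 2:3]   # perspective divide -> pixel coordinates
print(uv)                         # where each point samples color/texture
```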