2024
DOI: 10.1016/j.autcon.2024.105344
Review of simultaneous localization and mapping (SLAM) for construction robotics applications

Andrew Yarovoi,
Yong Kwon Cho
Cited by 10 publications (3 citation statements)
References 37 publications
“…For example, Kinect Fusion [12] can use the RGB-D Kinect camera to achieve instant 3D reconstruction of the environment through dense point cloud sampling and real-time tracking algorithms. Furthermore, algorithms from the LOAM series can also find applications in the Architecture, Engineering, and Construction (AEC) field [13].…”
Section: Introduction
Confidence: 99%
“…This technique, which holds a pivotal position in autonomous driving, augmented reality, and virtual reality, effectively mitigates the accumulation of errors in localization and map construction. By establishing robust constraints between the current frame and historical frames, loop-closure detection significantly enhances the practical utility of SLAM in autonomous navigation [2] and robotics applications [3]. Consequently, it facilitates the attainment of more precise and reliable spatial perception and navigation capabilities, thereby playing a pivotal role in ensuring the accuracy and efficiency of SLAM systems.…”
Section: Introduction
Confidence: 99%
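The loop-closure mechanism described in the quoted passage — a constraint between the current frame and a historical frame that pulls accumulated drift back toward consistency — can be sketched with a minimal, hypothetical example. This is not taken from the cited papers: it is a 1-D pose graph with made-up odometry values, solved as a linear least-squares problem.

```python
import numpy as np

# Hypothetical illustration of loop closure in a pose graph.
# True motion: +1, +1, -1, -1, so the robot returns to its start,
# but each odometry reading drifts by +0.1.
odom = np.array([1.1, 1.1, -0.9, -0.9])

# Dead reckoning: accumulate the drifting odometry directly.
dead_reckoned = np.concatenate([[0.0], np.cumsum(odom)])
drift_before = abs(dead_reckoned[-1])  # 0.4 of accumulated error

# Pose graph over unknowns x1..x4 (x0 is fixed at 0):
# one row per odometry edge, x_{i+1} - x_i = odom[i],
# plus one loop-closure row stating x4 should equal x0.
A = np.zeros((5, 4))
b = np.zeros(5)
for i in range(4):
    if i > 0:
        A[i, i - 1] = -1.0
    A[i, i] = 1.0
    b[i] = odom[i]
A[4, 3] = 1.0  # loop-closure constraint: x4 - x0 = 0
b[4] = 0.0

# Least-squares solve spreads the loop-closure residual
# over all edges instead of piling it onto the final pose.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
optimized = np.concatenate([[0.0], x])
drift_after = abs(optimized[-1])  # reduced from 0.4 to 0.08
```

Real SLAM back ends (e.g. pose-graph optimizers over SE(3)) follow the same pattern nonlinearly, and weight each constraint by its measurement uncertainty; here all edges share equal weight for simplicity.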
“…Navigating in complex environments requires simultaneous localization and mapping (SLAM) approaches based on light detection and ranging (LiDAR) or visual sensors [3–8]. Visual SLAM is highly dependent on lighting conditions and texture, but can offer fast mapping using a small, lightweight system or multiple high-resolution, wide-field-of-view cameras. Semantic interpretation can be used to improve SLAM operation in dynamic environments.…”
Section: Introduction
Confidence: 99%