2021
DOI: 10.3390/rs13183651
LiDAR-Based SLAM under Semantic Constraints in Dynamic Environments

Abstract: Driven by the practical demands of robot application environments, simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic environments. Traditional SLAM methods, however, suffer from pose-estimation deviations caused by data-association errors introduced by dynamic elements in the environment. The present study addresses this problem by proposing a SLAM approach based on light detection and ranging (Li…

Cited by 8 publications (5 citation statements). References 47 publications.
“…The network is able to operate even faster than the LiDAR frequency. Wang et al. [39] propose a 3D neural network, SANet, and add it to LOAM for semantic segmentation to segment dynamic objects. Jeong et al. [40] proposed a CNN-based 2D LiDAR odometry and mapping method that uses fault detection of scan matching in dynamic environments.…”
Section: Dynamic Points Removal Approaches in SLAM
confidence: 99%
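The dynamic-points-removal approaches cited above share a common pipeline: a network assigns a semantic label to every LiDAR point, and points belonging to dynamic classes are discarded before scan matching. A minimal sketch of that filtering step (the class IDs and array shapes here are illustrative assumptions, not the label map of any specific paper):

```python
import numpy as np

# Hypothetical label IDs for dynamic classes (e.g. car, pedestrian, cyclist);
# the actual IDs depend on the segmentation network's label map.
DYNAMIC_CLASSES = {1, 6, 7}

def remove_dynamic_points(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep only points whose semantic label is not a dynamic class.

    points: (N, 3) array of x, y, z coordinates from one LiDAR scan.
    labels: (N,) array of per-point semantic class IDs.
    """
    static_mask = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return points[static_mask]

# Toy scan: three points, the middle one labelled as a dynamic class.
scan = np.array([[1.0, 0.0, 0.0],
                 [2.0, 1.0, 0.0],
                 [0.0, 3.0, 0.5]])
labels = np.array([0, 6, 10])
static = remove_dynamic_points(scan, labels)
print(static.shape)  # (2, 3)
```

Downstream odometry (e.g. scan matching in a LOAM-style front end) then operates only on the remaining static points, which is what removes the data-association errors described in the abstract.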
“…Learning-based visual SLAM methods use deep neural networks to estimate depth information from monocular Amiri et al. (2019) or stereo camera inputs Li et al. (2019). Learning-based LiDAR SLAM methods, on the other hand, classify and segment the environment into different objects Xiao et al. (2019); Wang et al. (2021); Yu and Peng (2020); Langer et al. (2020), making it easier to build a map. Deep learning has also been utilized to address challenges such as handling dynamic objects, improving real-time performance, and dealing with large-scale environments.…”
Section: Simultaneous Localization and Mapping (SLAM)
confidence: 99%
“…With the rapid development of AI and robotics, such abilities need to be transformed from processing two-dimensional space-time, static past time, and abstract and abbreviated symbolic expression to processing three-dimensional space-time, dynamic present time, and fine and rich three-dimensional reproduction of realistic scenes [2]. Furthermore, the revolution of deep learning for robotic vision and the availability of large-scale benchmark datasets have advanced research on the key capabilities of environmental perception, such as semantic segmentation [3], instance segmentation [4], object detection and multi-object tracking [5]. However, most research focuses on category/object-wise improvement for the individual tasks (e.g., reasoning of a single category in semantic segmentation, recognition of an individual object in instance segmentation), which falls short of the practical need to provide a holistic environment understanding for intelligent robots.…”
Section: Introduction
confidence: 99%
“…On the other hand, the small number of LiDAR point cloud datasets with accurate annotation information has constrained otherwise flourishing research on point cloud panoptic segmentation in outdoor scenes. Since LiDAR is less susceptible than vision sensors to light and weather conditions, it is now extensively used in environmental perception applications such as robotic mapping, autonomous driving, 3D reconstruction, and other areas [3]. To pave the way for research on LiDAR-based scene understanding, Behley et al. introduced the SemanticKITTI dataset [7], which provides point-wise annotation of each LiDAR scan in the KITTI dataset.…”
Section: Introduction
confidence: 99%