2022
DOI: 10.3390/rs14133010

An Overview on Visual SLAM: From Tradition to Semantic

Abstract: Visual SLAM (VSLAM) has been developing rapidly due to its advantages of low-cost sensors, the easy fusion of other sensors, and richer environmental information. Traditional vision-based SLAM research has made many achievements, but it may fail to achieve the desired results in challenging environments. Deep learning has promoted the development of computer vision, and the combination of deep learning and SLAM has attracted more and more attention. Semantic information, as high-level environmental information, can …

Cited by 108 publications (58 citation statements)
References 245 publications
“…However, they did not include other vision sensors, such as event camera-based ones, which will be discussed later in Section 4.1. Chen et al. [18] reviewed a wide range of traditional and semantic VSLAM publications. They divided the SLAM development era into classical, algorithmic-analysis, and robust-perception stages and introduced hot issues there.…”
Section: Related Surveys (mentioning)
Confidence: 99%
“…They collected approaches that have been evaluated on the KITTI dataset, enabling them to have a brief description of the advantages and demerits of each system. Cheng et al. [18] reviewed the VSLAM-based autonomous driving systems and raised the future development trends of such systems in a similar manuscript. Some other researchers surveyed VSLAM works with the ability to work in real-world conditions.…”
Section: Related Surveys (mentioning)
Confidence: 99%
“…From a methodological point of view, localization is achieved by relying on Visual Odometry (VO) or Simultaneous Localization And Mapping (SLAM), as presented in numerous literature works, e.g., Yousif et al. [4], Agostinho et al. [5] and Chen et al. [6]. Most of these approaches leverage image feature (i.e., keypoint) tracking across multiple frames to estimate the camera ego-motion.…”
Section: Introduction (mentioning)
Confidence: 99%
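The ego-motion estimation mentioned in the excerpt above (tracking keypoints across frames and recovering the camera motion) can be made concrete with a short sketch. This is not code from any of the cited works; it is a minimal two-view example assuming OpenCV's ORB detector and essential-matrix decomposition, with a known intrinsic matrix K, and all names below are illustrative.

```python
# Minimal two-view ego-motion sketch (illustrative only, not from the cited papers).
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

def estimate_ego_motion(img_prev, img_curr, K):
    """Estimate relative rotation R and translation direction t between two
    grayscale frames using ORB keypoints and the essential matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Match binary ORB descriptors with Hamming distance and cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix, then decompose it into R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # in monocular VO, t is only recovered up to scale
```

Note that a full VO or SLAM pipeline would chain such pairwise estimates over time and refine them; the monocular scale ambiguity in t is one reason the surveyed works fuse additional sensors.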
“…In these scenarios, RGB-D cameras or LiDAR are often used as the primary sensors to capture the scene [1,2]. The Visual SLAM framework is now relatively mature and consists mainly of front-end feature extraction, back-end state estimation, loop closure detection, and map building [3]. Some excellent SLAM algorithms, such as ORB-SLAM2 [4], HECTOR-SLAM [5], LSD-SLAM [6], etc., have been applied in several fields with excellent results.…”
Section: Introduction (mentioning)
Confidence: 99%
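To make the four-module decomposition named in the excerpt above concrete (front-end feature extraction, back-end state estimation, loop closure detection, map building), here is a purely conceptual skeleton. It is not ORB-SLAM2 or any cited system; every class and function is a hypothetical placeholder that only shows how the modules are typically wired together.

```python
# Conceptual VSLAM skeleton (hypothetical placeholders, not a cited system).
from dataclasses import dataclass, field

@dataclass
class Map:
    keyframes: list = field(default_factory=list)
    landmarks: list = field(default_factory=list)

class FrontEnd:
    def extract_and_track(self, frame):
        # Placeholder: detect features and associate them with the last frame.
        return {"frame": frame, "features": []}

class BackEnd:
    def estimate_state(self, observation, slam_map):
        # Placeholder: optimize the current camera pose against map landmarks.
        return {"pose": "identity"}

class LoopCloser:
    def detect(self, observation, slam_map):
        # Placeholder: recognize a previously visited place.
        return False

class Mapper:
    def update(self, observation, pose, slam_map):
        slam_map.keyframes.append((observation["frame"], pose))

def run_vslam(frames):
    slam_map = Map()
    front, back, loop, mapper = FrontEnd(), BackEnd(), LoopCloser(), Mapper()
    for frame in frames:
        obs = front.extract_and_track(frame)       # front-end feature extraction
        pose = back.estimate_state(obs, slam_map)  # back-end state estimation
        if loop.detect(obs, slam_map):             # loop closure detection
            pass  # a real system would correct accumulated drift here
        mapper.update(obs, pose, slam_map)         # map building
    return slam_map
```

In a real system the loop-closure branch would trigger pose-graph or bundle-adjustment optimization to correct accumulated drift, which is where the back end and the map interact most heavily.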