2021
DOI: 10.3390/rs13163340

DV-LOAM: Direct Visual LiDAR Odometry and Mapping

Abstract: Self-driving cars have developed rapidly in the past few years, and Simultaneous Localization and Mapping (SLAM) is considered one of their basic capabilities. In this article, we propose a direct visual-LiDAR fusion SLAM framework that consists of three modules. Firstly, a two-staged direct visual odometry module, consisting of a frame-to-frame tracking step and an improved sliding-window-based refinement step, is proposed to estimate an accurate camera pose while maintaining efficiency.…
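To make the "direct" part of the front-end concrete: direct visual odometry estimates motion by minimizing the photometric error on raw image intensities rather than by matching extracted features. The sketch below is a deliberately reduced toy, aligning two grayscale frames under a 2D translation with Gauss-Newton; DV-LOAM itself optimizes a full 6-DoF pose using LiDAR depth, and the names here (bilinear, track_translation) are illustrative, not from the paper.

    import numpy as np

    def bilinear(img, xs, ys):
        # Sample img at float coordinates with bilinear interpolation.
        x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
        y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
        ax, ay = xs - x0, ys - y0
        return ((1 - ay) * (1 - ax) * img[y0, x0]
                + (1 - ay) * ax * img[y0, x0 + 1]
                + ay * (1 - ax) * img[y0 + 1, x0]
                + ay * ax * img[y0 + 1, x0 + 1])

    def track_translation(ref, cur, iters=50):
        # Gauss-Newton on the photometric error sum((cur(x + t) - ref(x))^2):
        # direct alignment of raw intensities, no feature extraction.
        h, w = ref.shape
        ys, xs = np.mgrid[2:h - 2, 2:w - 2]
        xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
        t = np.zeros(2)
        for _ in range(iters):
            wx, wy = xs + t[0], ys + t[1]
            r = bilinear(cur, wx, wy) - ref[ys.astype(int), xs.astype(int)]
            gx = 0.5 * (bilinear(cur, wx + 1, wy) - bilinear(cur, wx - 1, wy))
            gy = 0.5 * (bilinear(cur, wx, wy + 1) - bilinear(cur, wx, wy - 1))
            J = np.stack([gx, gy], axis=1)        # Jacobian of r w.r.t. t
            step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), -J.T @ r)
            t += step
            if np.linalg.norm(step) < 1e-8:
                break
        return t

    # Toy frames: a Gaussian blob shifted by (+3, -2) pixels.
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    ref = np.exp(-((xx - 30) ** 2 + (yy - 30) ** 2) / 50.0)
    cur = np.exp(-((xx - 33) ** 2 + (yy - 28) ** 2) / 50.0)
    print(track_translation(ref, cur))            # approx. [ 3. -2.]

The two-staged design in the abstract then wraps this kind of alignment twice: a fast frame-to-frame step for the initial pose, followed by a sliding-window refinement over several recent frames.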

Cited by 42 publications (21 citation statements)
References 69 publications
“…The main applications of RiWNet are dynamic visual SLAM, visual-LiDAR fusion odometry/SLAM, and 3D dense mapping. Here we show the effectiveness of RiWNet by adding it as a processing module that segments moving objects in the keyframes of our previous visual-LiDAR fusion SLAM work, DV-LOAM [62]. In DV-LOAM, because the relative transformation between the camera and the LiDAR is known, the image-based moving-object segmentation result alone is enough to filter the point cloud throughout the entire visual-LiDAR fusion SLAM pipeline.…”
Section: G. Applications
confidence: 94%
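The filtering step described in this quote amounts to projecting each LiDAR point into the image through the known extrinsics and discarding points that land on a moving-object mask pixel. A hedged sketch under that reading, with hypothetical names (filter_dynamic_points, T_cam_lidar), not the authors' code:

    import numpy as np

    def filter_dynamic_points(points_lidar, T_cam_lidar, K, moving_mask):
        # points_lidar : (N, 3) points in the LiDAR frame
        # T_cam_lidar  : (4, 4) known extrinsic transform, LiDAR -> camera
        # K            : (3, 3) camera intrinsics
        # moving_mask  : (H, W) bool array from the segmentation network

        # Transform points into the camera frame.
        p_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        p_cam = (T_cam_lidar @ p_h.T).T[:, :3]
        in_front = p_cam[:, 2] > 0.1      # keep points in front of the camera

        # Pinhole projection to pixel coordinates.
        uv = (K @ p_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        h, w = moving_mask.shape
        in_img = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

        # A point is kept as static if it projects outside the image
        # or onto a pixel that the network did not label as moving.
        dynamic = np.zeros(len(points_lidar), bool)
        dynamic[in_img] = moving_mask[v[in_img], u[in_img]]
        return points_lidar[~dynamic]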
“…In the context of odometry, one can classify systems by how the data are used to produce the output. For example, LiDAR measurements can complement the imagery while ego-motion is estimated with visual odometry, or vice versa; alternatively, the two types of odometry can operate separately and be fused at a higher abstraction level of the system's framework [61]. Systems that follow the first approach are usually denoted as tightly coupled, whereas the others are called loosely coupled [62].…”
Section: A. Data Fusion Strategies
confidence: 99%
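The loosely-coupled case in this quote can be pictured as each sensor pipeline producing its own motion estimate that is merged afterwards. A minimal toy sketch, fusing two translation estimates by inverse-covariance weighting; this is a generic stand-in for whatever fusion filter a given system uses, and fuse_loosely is a hypothetical name, not the scheme of [61] or [62]:

    import numpy as np

    def fuse_loosely(t_vo, cov_vo, t_lo, cov_lo):
        # Information-form fusion of two independent translation estimates,
        # one from visual odometry (VO) and one from LiDAR odometry (LO).
        info_vo, info_lo = np.linalg.inv(cov_vo), np.linalg.inv(cov_lo)
        cov = np.linalg.inv(info_vo + info_lo)
        return cov @ (info_vo @ t_vo + info_lo @ t_lo), cov

    # Toy values: VO is noisier along the driving direction, LO laterally.
    t, cov = fuse_loosely(np.array([1.00, 0.02, 0.0]),
                          np.diag([0.04, 0.01, 0.01]),
                          np.array([0.95, 0.00, 0.0]),
                          np.diag([0.01, 0.09, 0.01]))
    print(t)   # pulled toward the LO x-estimate and the VO y-estimate

A tightly-coupled system would instead put both sensors' raw residuals into a single joint optimization, rather than fusing two finished pose estimates like this.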
“…The work presented by W. Wang et al. [61], DV-LOAM, is an example of how to combine LiDAR and camera data at several levels for improved ego-motion estimation. DV-LOAM is composed of a front-end and a back-end part.…”
Section: B. State-of-the-Art Techniques
confidence: 99%
“…It makes up for DSO's lack of loop-closure detection by using a feature-based bag of words [7] to reliably detect loop closures, reducing the system's accumulated error. To reduce computational complexity while obtaining accurate results, a typical SLAM solution called LOAM (Lidar Odometry and Mapping in Real-time) and its variants achieve low-drift, real-time pose estimation and mapping by performing point-to-line and point-to-plane matching [8][9][10][11][12]. DV-LOAM [12] is a direct visual-LiDAR fusion SLAM framework.…”
Section: Introduction
confidence: 99%
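The point-to-line and point-to-plane matching mentioned in this quote reduces to two standard scalar residuals: the distance from a scan point to the line spanned by two map points, and to the plane spanned by three. A short sketch of those geometric definitions, with illustrative helper names rather than LOAM's actual code:

    import numpy as np

    def point_to_line(p, a, b):
        # Distance from scan point p to the line through map points a, b.
        d = b - a
        return np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d)

    def point_to_plane(p, a, b, c):
        # Distance from scan point p to the plane through map points a, b, c.
        n = np.cross(b - a, c - a)
        return abs(np.dot(p - a, n)) / np.linalg.norm(n)

    p = np.array([1.0, 1.0, 1.0])
    print(point_to_line(p, np.zeros(3), np.array([1.0, 0.0, 0.0])))   # sqrt(2)
    print(point_to_plane(p, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0])))                  # 1.0

LOAM-style pipelines minimize sums of such distances over edge and planar feature points to solve for the scan pose, which is what keeps the drift low at real-time rates.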