2020 American Control Conference (ACC)
DOI: 10.23919/acc45564.2020.9148037
LIV-LAM: LiDAR and Visual Localization and Mapping

Cited by 7 publications (3 citation statements)
References 23 publications
“…LIMO [27] leverages the power of deep learning to remove features on dynamic objects. LIV-LAM [78] proposes unsupervised learning for object discovery and uses the detected object features as landmark features.…”
Section: Loosely-coupled Methods
confidence: 99%
“…In addition to the currently dominant multi-sensor fusion methods based on the ORB-SLAM framework, there are many other excellent fusion methods worth studying. R. Radmanesh et al. [193] proposed a monocular SLAM method that uses light detection and ranging (LiDAR) to provide depth information; it processes unknown objects in the camera data in an unsupervised way, uses visually detected features as landmark features, and fuses them with LiDAR sensor data [194]. The proposed method outperforms maps generated by LiDAR alone in both computational efficiency and accuracy.…”
Section: (B) Other Fusion Options
confidence: 99%
“…If a loop is detected, it is added as an edge to the pose graph. LIV-LAM [7] integrates LiDAR-based odometry measurements with monocular-camera target detection and couples them with loop-closure detection through pose-graph optimization. LC-LVF [8] proposes a new error function that takes both scan and image data as constraints for pose-graph optimization, uses g2o for further refinement, and employs a Bag-of-Words-based method for revisited-place detection.…”
Section: Introduction
confidence: 99%
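The loop-closure mechanism described in the statement above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not code from LIV-LAM or LC-LVF: odometry edges chain consecutive poses, a detected loop adds a constraint edge between non-consecutive poses, and a naive least-squares optimization then corrects the accumulated drift (real systems use sparse solvers such as g2o; 1-D poses are used here for clarity).

```python
class PoseGraph:
    """Toy 1-D pose graph: nodes are poses, edges are relative-motion constraints."""

    def __init__(self):
        self.poses = [0.0]   # pose 0 anchors the graph at the origin
        self.edges = []      # (i, j, measured displacement from pose i to pose j)

    def add_odometry(self, motion):
        # Chain a new pose onto the last one using the odometry measurement.
        i = len(self.poses) - 1
        self.poses.append(self.poses[-1] + motion)
        self.edges.append((i, i + 1, motion))

    def add_loop_closure(self, i, j, measurement):
        # A detected revisit becomes an extra edge constraining two old poses.
        self.edges.append((i, j, measurement))

    def optimize(self, iters=100, step=0.1):
        # Gradient descent on the sum of squared edge residuals.
        for _ in range(iters):
            grad = [0.0] * len(self.poses)
            for i, j, z in self.edges:
                r = (self.poses[j] - self.poses[i]) - z
                grad[i] -= r
                grad[j] += r
            for k in range(1, len(self.poses)):  # pose 0 stays fixed
                self.poses[k] -= step * grad[k]


g = PoseGraph()
for m in (1.0, 1.0, 1.0):
    g.add_odometry(m)           # odometry drifts the final pose to 3.0
g.add_loop_closure(0, 3, 2.7)   # loop closure: true displacement is 2.7
g.optimize()                    # final pose settles between 2.7 and 3.0
```

The correction spreads the 0.3 of drift across all edges rather than snapping the last pose to the loop measurement, which is the essential behavior pose-graph optimization provides.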