2019
DOI: 10.1109/lra.2019.2928261
A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions

Abstract: Fusing data from LiDAR and camera is conceptually attractive because of their complementary properties. For instance, camera images are higher resolution and have colors, while LiDAR data provide more accurate range measurements and have a wider Field Of View (FOV). However, the sensor fusion problem remains challenging since it is difficult to find reliable correlations between data of very different characteristics (geometry vs. texture, sparse vs. dense). This paper proposes an offline LiDAR-camera fusion m…

Cited by 61 publications (32 citation statements)
References 25 publications
“…This requires upsampling of the sparse and irregular data by depth completion. Here, a method based on LiDAR-camera fusion turns out to be extremely useful as it produces high-resolution depth images [177].…”
Section: Radar Signal Processing
confidence: 99%
“…Hand-eye calibration can provide initial extrinsic parameters, but it depends heavily on visual odometry and LiDAR odometry accuracy [39]. A series of other studies [40][41][42] combined LiDAR-camera calibration with sensor fusion localization and mapping to establish a joint optimization function. The odometry and extrinsic parameters are optimized simultaneously for stable mapping.…”
Section: Targetless Approach
confidence: 99%
“…However, recognising a specific person from a slice of a 2D point cloud is hopeless. For this reason, moving along a direction frequently taken in robotics [22], [23], [24], we apply a combination of cameras and LiDARs. The use of separate systems for depth estimation and classification improves the robustness of the tracking system when one of the sensors fails; e.g., if the leader falls outside the camera field of view, we can still use the LiDAR sensor for some time, assuming a certain degree of reliability of the specific-person following.…”
Section: Related Work
confidence: 99%