Eleventh International Conference on Digital Image Processing (ICDIP 2019)
DOI: 10.1117/12.2539863
Velodyne LiDAR and monocular camera data fusion for depth map and 3D reconstruction

Cited by 5 publications (9 citation statements) | References 6 publications
“…Depth cameras provide rich depth information, but their field of view is quite narrow. Conversely, LiDARs have a wider field of view, but they provide sparse rather than rich environment information (Akhtar et al., 2019). LiDARs provide information in the form of a point cloud, whereas depth cameras provide luminance.…”
Section: Perception Sensing
confidence: 99%
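
The contrast drawn above, raster-form per-pixel data from a depth camera versus a sparse point cloud from a LiDAR, can be made concrete by back-projecting a depth image through a pinhole camera model. A minimal sketch, not from the cited paper; the intrinsics and frame size are illustrative assumptions:

import numpy as np

def depth_image_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth image into a 3D point cloud.

    depth: (H, W) array of metric depths (0 where invalid).
    fx, fy, cx, cy: pinhole intrinsics (illustrative values below).
    Returns an (N, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Hypothetical intrinsics and a synthetic depth frame for illustration.
K = dict(fx=525.0, fy=525.0, cx=319.5, cy=239.5)
depth = np.full((480, 640), 2.0)            # a flat wall 2 m away
cloud = depth_image_to_point_cloud(depth, **K)
print(cloud.shape)                           # (307200, 3)

Every valid pixel yields one 3D point, which is why a depth camera's output is dense within its narrow field of view, whereas a spinning LiDAR covers a wide field of view with comparatively few returns.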
“…As recently as 2019, Akhtar et al. [254] developed a data fusion system used to create a 3D model with a depth map and 3D object reconstruction. Jin et al. [255] proposed an approach to SLAM using a 2D LiDAR and a stereo camera with loop closures to estimate odometry.…”
Section: Mapping
confidence: 99%
“…The projection of transformed points onto a 2D raster image is often used in multimodal imaging systems for data fusion or registration [9,18,19] (Figure 3, left). The challenges that arise in image data fusion [20] are: non-commensurability, different resolutions (challenge (B)), number of dimensions, noise, missing data, and conflicting, contradicting, or inconsistent data (challenge (D)).…”
Section: Introduction
confidence: 99%
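
As a sketch of the projection step described above: given a LiDAR-to-camera extrinsic and pinhole intrinsics, the transformed 3D points can be rasterized into a sparse 2D depth image. All function names, parameters, and values here are illustrative assumptions, not taken from the cited works:

import numpy as np

def project_points_to_image(points, T_cam_lidar, fx, fy, cx, cy, h, w):
    """Project LiDAR points into a 2D raster image (sparse depth map).

    points: (N, 3) LiDAR points in the sensor frame.
    T_cam_lidar: 4x4 extrinsic transform from LiDAR to camera frame.
    Returns an (h, w) depth raster, 0 where no point projects.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0
    x, y, z = pts_cam[in_front].T

    # Pinhole projection onto integer pixel coordinates.
    u = np.round(fx * x / z + cx).astype(int)
    v = np.round(fy * y / z + cy).astype(int)

    raster = np.zeros((h, w))
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Write far points first so the nearest return wins per pixel.
    order = np.argsort(-z[inside])
    raster[v[inside][order], u[inside][order]] = z[inside][order]
    return raster

# Example with an identity extrinsic and two synthetic points.
T = np.eye(4)
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 10.0]])
depth_raster = project_points_to_image(pts, T, 525.0, 525.0, 319.5, 239.5, 480, 640)

The resulting raster is sparse, which is exactly the resolution mismatch (challenge (B)) named in the excerpt: few LiDAR returns must be registered against many camera pixels.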
“…Sensors such as the Orbbec3D Astra Pro or the Azure Kinect provide color point clouds at 30 fps, which is of particular interest for human–robot collaboration [22,23]. The fusion of camera data and modern 3D Light Detection and Ranging (LiDAR) data has many applications, including autonomous and safe control of mobile robots [18], object detection [8], and simultaneous localization and mapping (SLAM) [24]. Depth sensors are also widely used in scene analysis and human–computer interaction, e.g., object detection [8,17,25], (semantic) segmentation [15,16,17], or markerless motion capture [26].…”
Section: Introduction
confidence: 99%