2018
DOI: 10.3390/s18082730

Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

Abstract: Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and…

Cited by 98 publications (68 citation statements)
References 36 publications
“…A 360° camera captures dual images or video files from dual lenses, each with a 180° field of view, and either performs an on-camera automatic stitch of the images/video or lets the user perform off-board stitching of the images, to give a full 360° view of the world [28,116–118].…”
Section: 360° Camera
Mentioning confidence: 99%
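The dual-lens stitching described in the quotation can be made concrete. The sketch below, assuming an ideal equidistant fisheye model and perfectly back-to-back lenses, maps each pixel of an equirectangular panorama to a viewing direction, picks the lens whose hemisphere contains that direction, and samples the corresponding fisheye pixel. The function `fisheye_to_equirect` and all parameter names are hypothetical; a real 360° camera would additionally correct lens distortion and blend the seam between the two hemispheres.

```python
# Minimal sketch: stitch two 180-degree equidistant fisheye images into one
# 360-degree equirectangular panorama (no calibration, no seam blending).
import numpy as np

def fisheye_to_equirect(front: np.ndarray, back: np.ndarray,
                        out_h: int = 1024, out_w: int = 2048) -> np.ndarray:
    """front/back: square fisheye images from two back-to-back 180-deg lenses."""
    h, w = front.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0

    # Viewing direction for every panorama pixel: longitude spans 360 degrees,
    # latitude spans 180 degrees.
    lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)   # right
    y = np.sin(lat)                 # up
    z = np.cos(lat) * np.cos(lon)   # forward (optical axis of the front lens)

    pano = np.zeros((out_h, out_w) + front.shape[2:], dtype=front.dtype)
    for img, sign in ((front, 1.0), (back, -1.0)):
        # The back lens is the front lens rotated 180 degrees about the
        # vertical axis, which flips the x and z components.
        xl, yl, zl = sign * x, y, sign * z
        mask = zl >= 0.0                            # hemisphere this lens sees
        theta = np.arccos(np.clip(zl, -1.0, 1.0))   # angle off the optical axis
        r = radius * theta / (np.pi / 2.0)          # equidistant model: r = f * theta
        phi = np.arctan2(yl, xl)
        u = np.clip(cx + r * np.cos(phi), 0, w - 1).astype(int)
        v = np.clip(cy - r * np.sin(phi), 0, h - 1).astype(int)
        pano[mask] = img[v[mask], u[mask]]
    return pano
```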
“…The fusion can occur at three layers: fusion of raw data in the first layer, fusion of features in the second, and finally fusion at the decision layer. In the case of LiDAR and camera data fusion, two distinct steps effectively integrate/fuse the data [28,117,125]:…”
Section: Implementation of Data Fusion with the Given Hardware
Mentioning confidence: 99%
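The two integration steps mentioned above are usually an extrinsic rigid-body transform followed by an intrinsic camera projection. Below is a minimal sketch of that pipeline, assuming calibration has already produced a rotation `R`, translation `t`, and intrinsic matrix `K`; a simple pinhole model stands in for whatever (possibly wide-angle) camera model the calibration actually provides, and all names are illustrative rather than the paper's notation.

```python
# Minimal sketch of the two common LiDAR-camera fusion steps:
# (1) extrinsic rigid-body transform from the LiDAR frame to the camera frame,
# (2) intrinsic (here: pinhole) projection onto the image plane.
import numpy as np

def project_lidar_to_image(points: np.ndarray,   # (N, 3) LiDAR xyz
                           R: np.ndarray,        # (3, 3) LiDAR->camera rotation
                           t: np.ndarray,        # (3,)   LiDAR->camera translation
                           K: np.ndarray,        # (3, 3) camera intrinsics
                           img_w: int, img_h: int):
    # Step 1: extrinsics -- express every LiDAR point in the camera frame.
    cam_pts = points @ R.T + t

    # Keep only points in front of the camera (positive depth).
    in_front = cam_pts[:, 2] > 0.1
    cam_pts = cam_pts[in_front]

    # Step 2: intrinsics -- project to homogeneous pixel coords and normalize.
    uvw = cam_pts @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # Discard projections that fall outside the image bounds.
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return uv[inside], cam_pts[inside, 2]   # pixel coords and their depths
```

The returned pixel coordinates and depths are what first-layer (raw-data) fusion consumes, for example to paint LiDAR depth onto image pixels before any feature extraction.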
“…After the pre-processing work is completed, the sparse depth map is transformed into a dense depth map through the depth-completion framework, so that the LiDAR data and the image have the same resolution. Depth-completion methods can be divided into two types: guided depth completion [22,26–29] and non-guided depth completion [21,30].…”
Section: Figure 2, The Coordinate Conversion Between the Image and Th…
Mentioning confidence: 99%
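As one concrete (non-learned) baseline for the non-guided case, a sparse LiDAR depth map can be densified with a nearest-neighbor fill so that its resolution matches the image. This is only an illustrative stand-in for the learned depth-completion frameworks cited in the quotation; the function name and the zero-means-missing convention are assumptions.

```python
# Minimal non-guided depth-completion baseline: fill every empty pixel of a
# sparse depth map with the depth of its nearest valid neighbor.
import numpy as np
from scipy.ndimage import distance_transform_edt

def densify_sparse_depth(sparse: np.ndarray) -> np.ndarray:
    """sparse: (H, W) depth map where 0 marks pixels with no LiDAR return."""
    missing = sparse <= 0.0
    # For every missing pixel, find the indices of the nearest valid pixel
    # (valid pixels map to themselves)...
    _, (iy, ix) = distance_transform_edt(missing, return_indices=True)
    # ...and copy that pixel's depth, yielding a dense (H, W) map.
    return sparse[iy, ix]
```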
“…In an automatic driving scene, vehicles need to obtain not only their own accurate position information but also the position information of other surrounding vehicles to further assure safety [9]. In response to this, autonomous vehicles should be equipped with laser radar, millimeter-wave radar, binocular cameras, and other sensing devices to detect their surroundings, yet such devices are vulnerable to environmental interference, which can result in enormous error [10], [11]. The key to solving this problem is to connect autonomous vehicles to the network and introduce Road Side Units (RSU) on automatic driving roads [12], [13].…”
Section: Introduction
Mentioning confidence: 99%