2018 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2018.8500699

Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving

Cited by 65 publications (42 citation statements)
References 13 publications
“…Therefore, when the detection results of the single-object detection systems are different from each other, the performance can be improved through the proposed MYs-WM by reinforcement through result fusion. The detection result of the car through MYs-WM was improved to 89.83% (IOU = 0.7), and that of the pedestrian to 79.25% (IOU = 0.5), which is higher than that of the Faster R-CNN-based convergence system of [1]. Examples of fusion detection results of MYs-WM in comparison with Y-CM are shown in Figure 8 for cars and Figure 9 for pedestrians.…”
Section: Results
confidence: 91%
“…As mentioned earlier, the proposed system aims to enhance the performance of object detection by fusing all object detection results from Y-CM, Y-DM, and Y-RM through a weighted mean. Performance comparisons with the single-object detection systems (Y-CM, Y-DM, Y-RM) and [1], where Faster R-CNN is applied to a VGG-16 [26] structure based on an RGB camera and a LiDAR, were conducted with IOUs of 0.3, 0.5, and 0.7, and the evaluation results are summarized in Table 1 for cars and Table 2 for pedestrians. The results show that Y-CM had the highest detection performance of 87.12% (IOU = 0.7) for cars and 76.62% (IOU = 0.5) for pedestrians among the single-object detection systems, while Y-DM and Y-RM each showed about 16% lower detection performance.…”
Section: Results
confidence: 99%
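The statement above describes fusing the bounding boxes produced by several single-object detectors (Y-CM, Y-DM, Y-RM) through a weighted mean, then scoring the result by IOU. The cited paper does not spell out the fusion formula here, so the following is a minimal sketch under the assumption that matched boxes are averaged coordinate-wise with per-detector weights; the function names and weight values are illustrative, not the authors' implementation.

```python
import numpy as np

def weighted_mean_fusion(boxes, weights):
    """Fuse matched boxes from several detectors by a weighted mean.

    boxes   : (N, 4) array of [x1, y1, x2, y2], one row per detector
    weights : (N,) per-detector weights (need not sum to 1)
    """
    boxes = np.asarray(boxes, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the result is a valid box
    return (boxes * w[:, None]).sum(axis=0)

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

With three slightly offset detections of the same car, `weighted_mean_fusion` pulls the fused box toward the detector with the largest weight, and `iou` can then be thresholded at 0.3, 0.5, or 0.7 as in the evaluation above.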
“…Ishikawa et al. [7] estimated the motion of the LiDAR and the camera separately and determined the intrinsics and extrinsics through motion-to-motion correspondences. Banerjee et al. [1] detected the edges of objects in camera frames and calibrated through edge-to-edge correspondences. These approaches are not sensitive to the quality of calibration equipment but rely on the segmentation of LiDAR and camera frames.…”
Section: A. LiDAR-Camera Correspondence Collection
confidence: 99%
“…The first one is the extrinsic transformation, which is the projection model from the LiDAR to the camera coordinate frame. This 6-DOF matrix can be expressed as (1).…”
Section: A. Calibration Models
confidence: 99%
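The 6-DOF extrinsic transformation referenced above maps LiDAR points into the camera coordinate frame using three rotation angles and a translation vector. Since the quoted equation (1) is not reproduced here, the following is a minimal sketch of the standard construction: a 4x4 homogeneous transform from Euler angles and a translation. The Z-Y-X Euler convention and all parameter names are assumptions for illustration, not the calibration model of the cited work.

```python
import numpy as np

def extrinsic_matrix(roll, pitch, yaw, tx, ty, tz):
    """Build a 4x4 homogeneous LiDAR-to-camera transform from 6 DOF:
    three Euler angles (radians, Z-Y-X convention assumed) plus translation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def lidar_to_camera(points, T):
    """Transform (N, 3) LiDAR points into the camera coordinate frame."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (T @ pts_h.T).T[:, :3]
```

Composing this extrinsic transform with the camera intrinsic matrix then yields the full LiDAR-to-image projection used for fusion.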