2021
DOI: 10.3390/electronics10212608

A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment

Abstract: In this paper, we develop end-to-end autonomous driving based on a 2D LiDAR sensor and a camera sensor that predicts the control values of the vehicle from the input data, instead of modeling rule-based autonomous driving. Unlike many studies that utilize simulated data, we created an end-to-end autonomous driving algorithm with data obtained from real driving and analyzed the performance of our proposed algorithm. Based on the data obtained from an actual urban driving environment, end-to-end autonomous dri…
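The fusion idea in the abstract — extract features from a camera image and a 2D LiDAR scan, concatenate them, and regress the vehicle's control values — can be illustrated with a minimal sketch. This is not the authors' code: the feature extractors below are trivial stand-ins for the CNN backbones, and all shapes, names, and the two-value [steering, throttle] output are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def camera_features(image):
    """Stand-in for a CNN backbone: global-average-pool each channel."""
    return image.mean(axis=(1, 2))             # shape: (channels,)

def lidar_features(ranges):
    """Stand-in for a feature extractor over the 2D LiDAR range scan."""
    # Coarsely bin the 360-beam range profile into a fixed-length vector.
    return ranges.reshape(8, -1).mean(axis=1)  # shape: (8,)

def predict_control(image, ranges, w, b):
    """Fuse both feature vectors and regress [steering, throttle]."""
    fused = np.concatenate([camera_features(image), lidar_features(ranges)])
    return w @ fused + b                       # shape: (2,)

# Toy inputs: a 3-channel 64x64 image and a 360-beam 2D LiDAR scan.
image = rng.random((3, 64, 64))
ranges = rng.random(360)

# Fusion head: 3 camera features + 8 LiDAR features -> 2 control values.
w = rng.standard_normal((2, 3 + 8)) * 0.1
b = np.zeros(2)

control = predict_control(image, ranges, w, b)
print(control.shape)  # (2,)
```

In an actual end-to-end pipeline the stand-in extractors would be learned convolutional branches and the fusion head would be trained by regressing against recorded human driving commands; the sketch only shows the data flow of the fused prediction.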


Cited by 11 publications (6 citation statements)
References 21 publications
“…A detailed comparison of the proposed method's performance with state-of-the-art methods is given in Table 5. From Table 5 it is clearly evident that the classification performance of the proposed method is better than the methods discussed in [17], [21], [12], [15], [23], [20], and [13], while the performance is comparable with the methods discussed in [22] and [28].…”
Section: Comparison With Existing Methods On LivDet 2015 Database
confidence: 96%
“…Experimental results indicate that the proposed affordable, compact, and robust fusion system outperforms benchmark models and can be efficiently used in real time for the vehicle's environment perception. [58]: CNN-based real-time semantic segmentation of 3D LiDAR data for autonomous vehicle perception, based on the projection method and the adaptive break point detector method; practical implementation with satisfactory speed and accuracy of the proposed method. [59]: End-to-end self-driving algorithm using a CNN that predicts the vehicle's longitudinal and lateral control values based on the input camera images and 2D LiDAR point cloud data.…”
Section: Reference Description Of Application Conclusion
confidence: 97%
“…Meanwhile, Ref. [13] proposed predicting longitudinal and lateral control values using LiDAR and camera fusion, relying on CNN architectures based on Inception [14] and ResNet [15].…”
Section: Related Work
confidence: 99%