2019 Third World Conference on Smart Trends in Systems Security and Sustainability (WorldS4)
DOI: 10.1109/worlds4.2019.8904020
An Integrated Framework for Autonomous Driving: Object Detection, Lane Detection, and Free Space Detection

Abstract: In this paper, we present a deep neural network based real-time integrated framework to detect objects, lane markings, and drivable space using a monocular camera for advanced driver assistance systems. The object detection framework detects and tracks objects on the road such as cars, trucks, pedestrians, bicycles, motorcycles, and traffic signs. The lane detection framework identifies the different lane markings on the road and also distinguishes between the ego lane and adjacent lane boundaries. The free sp…

Cited by 13 publications (10 citation statements); References 13 publications.
“…Meanwhile, the Embedding Decoder and Binary Decoder architectures are similar, except for the number of output dimensions. The lane detection system proposed in [77] is based on the DriveWorks LaneNet pipeline, which uses camera images. That paper presents an integrated framework for autonomous driving based on the NVIDIA deep neural network multi-class object detection framework, the lane detection framework, and the free space detection framework.…”
Section: I) Conventional Deep Learning (mentioning, confidence: 99%)
“…The camera's position should be fixed and is usually expected to be at the vehicle's center. Next, a Toyota Prius autonomous-driving research prototype vehicle with an NVIDIA Drive PX 2 and a Sekonix GMSL camera was used by Kemsaram and Das [77]. In the car, a GMSL connector connects the Sekonix GMSL camera to the Drive PX 2.…”
Section: ) Camera (mentioning, confidence: 99%)
“…Except for object detection, many autonomous driving perception tasks can be formulated as semantic segmentation. For example, free-space detection [35, 57, 107] is a basic module in many autonomous driving systems that classifies ground pixels into drivable and non-drivable regions. Some lane detection methods [24, 84] also use a multi-class semantic segmentation mask to represent the different lanes on the road.…”
Section: Semantic Segmentation (mentioning, confidence: 99%)
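The formulation described above, free-space detection as binary segmentation over a per-pixel class map, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class ids and the toy segmentation map are hypothetical, and a real system would obtain the class map from a trained segmentation network.

```python
import numpy as np

# Assumption: class id 0 denotes "road"; all other ids are non-drivable.
DRIVABLE_CLASSES = [0]

def free_space_mask(class_map: np.ndarray) -> np.ndarray:
    """Reduce a per-pixel class map to a boolean drivable-space mask."""
    return np.isin(class_map, DRIVABLE_CLASSES)

# Toy 3x4 class map: 0 = road, 1 = car, 2 = sidewalk (illustrative ids).
seg = np.array([[1, 0, 0, 2],
                [0, 0, 0, 2],
                [0, 0, 1, 0]])
mask = free_space_mask(seg)
print(mask.sum())  # number of pixels classified as drivable
```

The same reduction extends to the multi-class lane masks mentioned in the snippet: each lane gets its own class id, and per-lane boolean masks are extracted the same way.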
“…Labeling the images and then training the model on the features of each labeled class is essential for model training. Examples include tracking the ball or players in sports competitions [30] and lane tracking in unmanned vehicles [31]. In this study, the SSD ResNet50 model, one of the mobile object detection models, was used.…”
Section: Object Detection (mentioning, confidence: 99%)