Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing 2018
DOI: 10.1145/3293353.3293364
Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision

Cited by 26 publications (18 citation statements) | References 15 publications
“…Besides, to address the problem of an unstable learning process, researchers have devised network structures that can be used in complex environments. For example, Mehta et al [28] proposed a multi-task learning-from-demonstration (MTLfD) framework that predicts visual affordances and action primitives and guides the predicted driving commands through direct supervision. Sauer et al [29] presented a direct-perception method that maps video inputs to intermediate representations and is suited to autonomous guidance in complex urban surroundings, reducing traffic accidents.…”
Section: Related Work
confidence: 99%
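The MTLfD idea described above — a main driving-command loss combined with directly supervised auxiliary tasks — can be sketched as a weighted loss sum. This is a minimal illustration only; the function name and weight values are assumptions, not taken from the paper.

```python
# Sketch of a multi-task loss with auxiliary supervision (MTLfD-style).
# The main driving-command loss is combined with auxiliary losses for
# visual affordances and action primitives via fixed weights.
# Weights w_aff and w_prim are illustrative, not values from the paper.

def multitask_loss(driving_loss, affordance_loss, primitive_loss,
                   w_aff=0.5, w_prim=0.5):
    """Weighted sum of the main task loss and the auxiliary supervision terms."""
    return driving_loss + w_aff * affordance_loss + w_prim * primitive_loss

total = multitask_loss(1.0, 0.4, 0.2)  # 1.0 + 0.5*0.4 + 0.5*0.2 = 1.3
```

In practice the weights trade off how strongly the auxiliary tasks shape the shared representation; the excerpt does not specify how the cited work sets them.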
“…Although there are prototypes of autonomous vehicles currently being tested on public streets, some of the challenges of autonomous driving are not completely solved yet. Current challenges in autonomous-vehicle development are sensor fusion [38,39,40,41], higher-level planning decisions [42,43,44,45,46], end-to-end learning for autonomous driving [1,2,3,4,5,47,48,49], reinforcement learning for autonomous driving [5,50,51,52,53], and human-machine interaction [54,55]. A systematic comparison of deep learning architectures used for autonomous vehicles is given in [56], and a short overview of sensors and sensor fusion in autonomous vehicles is presented in [57].…”
Section: Related Work
confidence: 99%
“…The known solutions for end-to-end learning for autonomous driving [1,2,3,4,5] are developed mostly for real vehicles, where the machine-learning model used for inference is deployed on a high-performance computer, usually located in the trunk of the vehicle, or those solutions use very deep neural networks that are computationally expensive (e.g., the ResNet50 architecture in [3]). However, our idea was to develop a significantly smaller solution, a lightweight deep neural network, with performance during autonomous driving similar to known solutions but with a smaller computational cost that enables deployment on an embedded platform.…”
Section: Introduction
confidence: 99%
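The cost gap this excerpt alludes to can be illustrated by counting convolution parameters. A minimal sketch; apart from the ResNet-style 7×7, 64-channel stem, the layer shapes are hypothetical examples, not taken from the cited networks.

```python
# Rough illustration of why a lightweight network matters for embedded
# deployment: parameter count of a single convolution layer, comparing a
# heavy ResNet-style stem against a small hypothetical stem.

def conv_params(in_ch, out_ch, k):
    """Parameters of a k x k convolution: weights (in*out*k*k) plus biases."""
    return in_ch * out_ch * k * k + out_ch

heavy = conv_params(3, 64, 7)   # ResNet-style 7x7, 64-channel stem: 9,472 params
light = conv_params(3, 16, 3)   # hypothetical light 3x3, 16-channel stem: 448 params
```

The same 21× gap compounds across dozens of layers, which is the motivation the excerpt gives for a smaller network on embedded hardware.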
“…The known solutions for end-to-end learning for autonomous driving [4], [5], [14], [9], [30] are developed mostly for real vehicles, where the machine-learning model used for inference is deployed on a high-performance computer, usually located in the trunk of the vehicle, or those solutions use deep neural networks that are computationally expensive (e.g., the ResNet50 architecture in [14]).…”
Section: Fig 2 Block Diagram of Interface Between AI Sensor and Act…
confidence: 99%
“…To counter this problem, we propose to use the roof surface of the vehicle to generate power from solar energy, store it in Li-ion batteries, and then use this power source to run SBC computers with the proposed CNN models. Based on the road type, the appropriate CNN can be used instead of one very complex, computationally expensive network [14].…”
Section: Fig 2 Block Diagram of Interface Between AI Sensor and Act…
confidence: 99%
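The per-road-type model selection suggested above can be sketched as a simple dispatch table; all model identifiers and road-type names here are hypothetical, chosen only to illustrate the idea of swapping small specialized CNNs instead of running one large network.

```python
# Hypothetical dispatch table mapping a detected road type to the identifier
# of a small specialized CNN, with a generic fallback for unknown road types.

ROAD_MODELS = {
    "highway": "cnn_highway_small",
    "urban": "cnn_urban_small",
    "rural": "cnn_rural_small",
}

def select_model(road_type, default="cnn_generic"):
    """Return the identifier of the CNN to load for the detected road type."""
    return ROAD_MODELS.get(road_type, default)
```

On an SBC this keeps only one small model resident at a time, which is the computational saving the excerpt is after.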