Proceedings of the British Machine Vision Conference 2017
DOI: 10.5244/c.31.11
Virtual to Real Reinforcement Learning for Autonomous Driving

Abstract: Reinforcement learning is considered a promising direction for driving policy learning. However, training an autonomous driving vehicle with reinforcement learning in a real environment involves unaffordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make a model trained in a virtual environment workable in the real world. The proposed network can convert non-realistic vir…

Cited by 186 publications (91 citation statements)
References 19 publications
“…The agent then processes these inputs through an RNN to provide a heading, speed, and waypoint, which are then achieved through a low-level controller. This had the advantage that pre-processed inputs could be obtained either from simulation or real-world data, which makes transferring driving policies from simulation to the real world easier [143], [144]. Furthermore, synthesising perturbations to model recoveries from incorrect lane positions or even scenarios such as collisions or driving off-road provides the model with robustness to errors and allows the model to learn to avoid such scenarios.…”
Section: Simultaneous Lateral and Longitudinal Control Systems
confidence: 99%
“…Pan et al. [PYWL17] use a novel realistic translation network to train an autonomous driving model in a virtual environment and then use it in the real‐world environment. In this virtual‐to‐real reinforcement learning framework, the images from virtual environment are segmented to scene‐parsing representations first and then are translated to synthetic images.…”
Section: Applications in Autonomous Driving
confidence: 99%
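The two-stage pipeline the excerpt above describes (virtual frame → scene-parsing map → realistic synthetic frame) can be sketched as follows. This is a minimal illustration of the data flow only: the two stages are toy functions standing in for the trained segmentation and generator networks, and all names, class counts, and colour rules here are hypothetical, not taken from the paper.

```python
import numpy as np

N_CLASSES = 4  # illustrative label set, e.g. road / vehicle / vegetation / sky

def segment(virtual_frame: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: virtual RGB frame (H, W, 3) -> per-pixel class ids (H, W).

    A real implementation would be a trained scene-parsing CNN; here we
    bucket pixel brightness into N_CLASSES bins just to produce a label map.
    """
    brightness = virtual_frame.mean(axis=-1)  # values in [0, 1)
    return np.minimum((brightness * N_CLASSES).astype(int), N_CLASSES - 1)

def translate(parsing: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: scene parsing (H, W) -> synthetic RGB frame (H, W, 3).

    A trained generator would synthesise realistic texture per region;
    here each class id is simply painted with a fixed colour from a palette.
    """
    palette = np.linspace(0.0, 1.0, N_CLASSES * 3).reshape(N_CLASSES, 3)
    return palette[parsing]  # integer-array indexing broadcasts to (H, W, 3)

def virtual_to_real(virtual_frame: np.ndarray) -> np.ndarray:
    """Full pipeline: segment the virtual frame, then render it realistically."""
    return translate(segment(virtual_frame))

# Stand-in for one rendered frame from the simulator.
frame = np.random.default_rng(0).random((4, 4, 3))
real_like = virtual_to_real(frame)
assert real_like.shape == frame.shape
```

The point of the intermediate scene-parsing representation is that it is domain-invariant: both simulator frames and real camera frames map to the same label space, so a policy trained on the translated output transfers more easily.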
“…Conversely, discretization is avoided in [9], by introducing a continuous non-linear optimization method where obstacles in the form of polygons are converted to quadratic constraints. c) Learned Motion Planning: Learning approaches to motion planning have mainly been studied from an imitation learning (IL) [10], [11], [12], [13], [14], [15], [16], or reinforcement learning (RL) [17], [18], [19], [20] perspective. While most IL approaches provide an end-to-end training framework to control outputs from sensory data, they suffer from compounding errors due to the sequential decision making process of self-driving.…”
Section: Related Work
confidence: 99%