2017
DOI: 10.48550/arxiv.1704.03952
Preprint

Virtual to Real Reinforcement Learning for Autonomous Driving



Cited by 85 publications (72 citation statements)
References 13 publications
“…To extend this in the real world, one option would be to leverage multiple pairs of physical protagonist and adversarial agents, which then update a global network. Alternatively, sim-to-real transfer, an active area of research [46]-[49], could be investigated to better leverage the faster training offered by simulators and to minimise the amount of costly real-world training required.…”
Section: Discussion
confidence: 99%
“…In sequential decision making, the action of the previous step will affect the next step. Many problems can be represented in this form, such as game playing [15]-[17], autonomous driving [18]-[20], robot control [21], [22], recommender systems [23] and trading [24]. Inspired by this work, we also adopt the classic architecture in which the policy network guides the MCTS.…”
Section: Related Work
confidence: 99%
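The excerpt above refers to the classic architecture in which a policy network guides MCTS (AlphaGo-style search). A minimal sketch of that idea, assuming a PUCT-style selection rule; the names (Node, select_child, c_puct) and constants are illustrative, not taken from the cited paper:

import math

# Hedged sketch: the policy network's prior P(s, a) biases which child
# of an MCTS node gets explored; names and constants are illustrative.

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(s, a) from the policy network
        self.visit_count = 0        # N(s, a)
        self.value_sum = 0.0        # sum of backed-up values
        self.children = {}          # action -> Node

    def value(self):
        # mean action value Q(s, a); 0 for unvisited children
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.5):
    # maximise Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
    total_visits = sum(c.visit_count for c in node.children.values())
    return max(
        node.children.items(),
        key=lambda kv: kv[1].value()
        + c_puct * kv[1].prior * math.sqrt(total_visits + 1) / (1 + kv[1].visit_count),
    )  # returns the (action, child) pair to descend into

# Example: priors from a stubbed policy network over three actions.
root = Node(prior=1.0)
for action, p in enumerate([0.6, 0.3, 0.1]):
    root.children[action] = Node(prior=p)
print(select_child(root)[0])  # the highest-prior child is explored first

With zero visit counts the prior term dominates, which is exactly how the policy network "guides" the search before value estimates accumulate.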
“…Until recently, the choice of an optimal action function was only solvable either in very specific settings where a closed form solution exists or where both the action and the state spaces are finite and rather small, see [WD92]. In the last decade, the combination of ideas from optimal control with the flexibility of deep learning has enabled astonishing progress in the context of self-driving cars, see [Pan+17], games, see [Mni+15] and robotics, see [Lil+15]. First for discrete action spaces, see [Mni+15], and even more recently for continuous and potentially high dimensional action spaces.…”
Section: Differentiable Reinforcement Learning, 2.1 The Reinforcement L...
confidence: 99%
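For the finite, "rather small" state and action spaces mentioned in the excerpt, the optimal action-value function can be learned with tabular Q-learning in the spirit of [WD92]. A minimal sketch, where the toy chain environment, its reward, and all hyperparameters are illustrative assumptions rather than details from the cited works:

import numpy as np

n_states, n_actions = 5, 2            # finite, small spaces -> a lookup table suffices
Q = np.zeros((n_states, n_actions))   # tabular Q(s, a)
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    # Toy chain: action 1 moves right, action 0 moves left; reward 1 at the right end.
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy per state; non-terminal states learn to move right

Once the state or action space becomes large or continuous, the lookup table is replaced by a function approximator, which is the deep reinforcement learning setting the excerpt goes on to describe.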
“…Its combination with deep learning, see [LBH15], called deep reinforcement learning (DRL), see [Fra+18], has recently had a huge success in addressing sequential decision problems. For example, it outperforms the best humans at many games, see [Mni+15], and has state of the art applications in many real life applications such as robotics, see [Lil+15], and self-driving cars, see [Pan+17].…”
Section: Introduction
confidence: 99%