2018 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2018.8500718

Overtaking Maneuvers in Simulated Highway Driving using Deep Reinforcement Learning

Cited by 73 publications (42 citation statements)
References 8 publications
“…• Curriculum learning describes a type of learning in which training starts with only easy examples of a task and then gradually increases the difficulty. This approach is used in [18]–[20]. • Adversarial learning aims to fool models through malicious input.…”
Section: B. Reinforcement Learning (mentioning)
confidence: 99%
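The curriculum idea quoted above — start with easy episodes, then ramp up difficulty — can be sketched as a simple schedule. Everything here is illustrative: the task parameters (`n_traffic_cars`, `lead_car_speed`) and the linear ramp are assumptions, not details from the cited papers.

```python
def make_task(difficulty: float) -> dict:
    """Hypothetical task generator: higher difficulty adds traffic
    and raises the speed of the car that must be overtaken."""
    return {
        "n_traffic_cars": int(1 + 4 * difficulty),
        "lead_car_speed": 40.0 + 40.0 * difficulty,  # km/h, illustrative
    }

def curriculum(n_episodes: int, warmup: int):
    """Hold difficulty at 0 (easy, e.g. lane keeping only) for a
    warm-up phase, then ramp linearly to 1 (full overtaking)."""
    for ep in range(n_episodes):
        if ep < warmup:
            difficulty = 0.0
        else:
            difficulty = min(1.0, (ep - warmup) / (n_episodes - warmup - 1))
        yield make_task(difficulty)

tasks = list(curriculum(n_episodes=100, warmup=20))
```

The warm-up phase mirrors the "easy examples first" idea; real curricula often advance on performance thresholds rather than episode count.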
“…The surroundings from the perspective of the vehicle can be described by a coarse perception map where the target is represented by a red dot (c) (source: [78]). An alternative, provided by TORCS and used in [20], is to represent the lane markings with imagined beam sensors, since there is no reflection. The agent in the cited example uses readings from 19 sensors with a 200 m range, positioned at every 10° on the front half of the car, each returning the distance to the track edge.…”
Section: E. Observation Space (mentioning)
confidence: 99%
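The beam-sensor layout described in the excerpt (19 beams, 10° apart, front half of the car, 200 m range) can be sketched directly. The `distance_to_edge` query and the straight-track example are assumptions for illustration; only the beam count, spacing, and range come from the quote.

```python
import math

SENSOR_RANGE = 200.0  # metres, as in the cited TORCS setup

# 19 beams spaced 10 degrees apart over the front half of the car,
# i.e. from -90 deg (left) to +90 deg (right) relative to the heading.
BEAM_ANGLES_DEG = [-90 + 10 * i for i in range(19)]

def read_beams(distance_to_edge):
    """Hypothetical reader: `distance_to_edge(angle_rad)` is a
    track-geometry query (not part of the cited work); each reading
    is clipped to the sensor range."""
    return [min(distance_to_edge(math.radians(a)), SENSOR_RANGE)
            for a in BEAM_ANGLES_DEG]

# Assumed example: a straight track 10 m wide with the car centred,
# so each lateral edge is 5 m away and a beam at angle a travels
# 5 / |sin a| metres before leaving the track.
def straight_track(a):
    s = abs(math.sin(a))
    return 5.0 / s if s > 5.0 / SENSOR_RANGE else SENSOR_RANGE

readings = read_beams(straight_track)
```

On this track the sideways beams read 5 m, the forward beam saturates at the 200 m range, matching the intuition that such readings encode lane position and heading in a compact fixed-length vector.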
“…The RL algorithm was based on DDPG and applied to the navigation task. Kaushik et al. [14] also used DDPG to learn overtaking maneuvers in a continuous action space via curriculum learning: the agent first learned simple tasks (lane keeping) and then moved on to complex tasks (overtaking), with the goal of quickly overtaking the car in front of the RL vehicle. Sallab et al. [15] compared the effect of using a discrete action space and a continuous action space for the lane-keeping task, using a deep Q-network and a Deep Deterministic Actor Critic (DDAC) respectively, and concluded that both methods could achieve successful lane-keeping behavior but that DDAC showed better performance with smoother actions.…”
Section: Related Work (mentioning)
confidence: 99%
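The discrete-versus-continuous comparison in the excerpt comes down to how the action is selected. A minimal sketch, assuming steering as the only action and illustrative bin values (neither the bin set nor the function names come from the cited papers):

```python
import numpy as np

# Discrete action set (DQN-style): a handful of fixed steering values.
# The network outputs one Q-value per bin; the agent takes the argmax.
DISCRETE_STEERING = np.array([-0.5, -0.1, 0.0, 0.1, 0.5])

def dqn_action(q_values: np.ndarray) -> float:
    """Pick the steering bin with the highest Q-value."""
    return float(DISCRETE_STEERING[int(np.argmax(q_values))])

def ddpg_action(actor_output: float, noise: float = 0.0) -> float:
    """Continuous actor output (DDPG/DDAC-style): any steering in
    [-1, 1], plus optional exploration noise, clipped to bounds."""
    return float(np.clip(actor_output + noise, -1.0, 1.0))
```

The coarse bins explain the quoted finding: a discrete policy can only jump between fixed angles, while the continuous actor can output any intermediate value, yielding smoother steering.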
“…Prior works on reinforcement learning for autonomous driving that used fully-connected network architectures and fixed-size inputs [6], [7], [5], [8], [9] are limited in the number of vehicles that can be considered. CNNs using occupancy grids [10], [11] are limited to their initial grid size.…”
Section: Introduction (mentioning)
confidence: 99%
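The fixed-size-input limitation quoted above is easy to see in code: a fully-connected network needs a constant-length vector, so the observation must reserve a fixed number of vehicle slots. A minimal sketch, with the slot budget and per-vehicle features chosen for illustration only:

```python
import numpy as np

MAX_VEHICLES = 4  # fixed budget baked into the network's input layer

def fixed_size_obs(ego, others):
    """Flatten up to MAX_VEHICLES surrounding cars into a vector of
    constant length. Cars beyond the budget are silently dropped and
    empty slots are zero-padded — the limitation the excerpt notes.
    Each car is an assumed (x, y, speed) tuple; features encoded are
    relative position and relative speed."""
    obs = np.zeros(3 * MAX_VEHICLES)
    for i, car in enumerate(others[:MAX_VEHICLES]):
        obs[3 * i:3 * i + 3] = [car[0] - ego[0],
                                car[1] - ego[1],
                                car[2] - ego[2]]
    return obs

# Six surrounding cars, but only four fit the fixed-size vector.
obs = fixed_size_obs((0.0, 0.0, 30.0), [(10.0, 3.5, 25.0)] * 6)
```

Architectures that aggregate a variable number of inputs (e.g. recurrent or attention-based encoders) avoid both this slot budget and the fixed extent of an occupancy grid.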