2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8917192
Automated Lane Change Decision Making using Deep Reinforcement Learning in Dynamic and Uncertain Highway Environment

Abstract: Autonomous lane changing is a critical feature for advanced autonomous driving systems that involves several challenges, such as uncertainty in other drivers' behaviors and the trade-off between safety and agility. In this work, we develop a novel simulation environment that emulates these challenges and train a deep reinforcement learning agent that yields consistent performance in a variety of dynamic and uncertain traffic scenarios. Results show that the proposed data-driven approach performs significantly …

Cited by 93 publications (62 citation statements) · References 21 publications
“…9). The same approach can be seen in [55], though extending the number of traced objects to nine. These studies lack lateral information; in [54], however, the lateral positions and speeds are also included in the input vector, resulting in a 6×(dx, dy, dvx, dvy) structure representing the longitudinal and lateral distances and speed differences to the ego vehicle, respectively.…”
Section: E. Observation Space
confidence: 97%
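The 6×(dx, dy, dvx, dvy) observation described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `build_observation`, the `(x, y, vx, vy)` state tuples, and the choice to zero-pad missing vehicles are all assumptions.

```python
import numpy as np

def build_observation(ego, others, max_vehicles=6):
    """Flat ego-relative observation: max_vehicles x (dx, dy, dvx, dvy).

    Each traced vehicle contributes its longitudinal and lateral
    distance and speed differences to the ego vehicle. Vehicle states
    are hypothetical (x, y, vx, vy) tuples; missing slots stay zero.
    """
    obs = np.zeros((max_vehicles, 4))
    # Trace the closest vehicles first, so the fixed-size vector
    # always contains the most relevant neighbours.
    others = sorted(
        others,
        key=lambda v: (v[0] - ego[0]) ** 2 + (v[1] - ego[1]) ** 2,
    )
    for i, (x, y, vx, vy) in enumerate(others[:max_vehicles]):
        obs[i] = (x - ego[0], y - ego[1], vx - ego[2], vy - ego[3])
    return obs.flatten()  # shape (24,) for 6 x (dx, dy, dvx, dvy)
```

With one lead vehicle 20 m ahead, 3.5 m to the side, and 2 m/s slower, the first four entries are (20.0, 3.5, -2.0, 0.0) and the remaining slots stay zero.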
“…There are lighter approaches, where the episode terminates with failure before an accident occurs, for example when the tangent angle to the track becomes too high or the vehicle gets too close to other participants. These "before accident" terminations speed up training by bringing the failure signal forward in time, though their design requires caution [55].…”
Section: Rewarding
confidence: 99%
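An early "before accident" termination check of the kind described above might look like the following sketch. The function name and the threshold values (0.5 rad heading error, 2.0 m safety gap) are assumptions for illustration, not values from the cited work.

```python
def check_termination(heading_error, min_gap,
                      max_heading=0.5, safe_gap=2.0):
    """Terminate the episode with failure *before* a collision happens.

    Hypothetical thresholds: fail when the tangent (heading) angle to
    the track exceeds max_heading radians, or when the gap to the
    nearest participant drops below safe_gap metres. Ending the episode
    early moves the failure signal forward in time, which speeds up
    training, but thresholds set too tight can punish safe behaviour.
    """
    if abs(heading_error) > max_heading:
        return True, "heading_too_high"
    if min_gap < safe_gap:
        return True, "too_close"
    return False, None
```

In a Gym-style environment loop, the boolean would be returned as the `done` flag and the reason logged for reward shaping or diagnostics.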
“…Bouton et al. used Reinforcement Learning (RL) with linear temporal logic to determine the driving policy at unsignalized intersections [46]. In addition, RL was proposed to construct a lane-change decision algorithm [47,48].…”
Section: Introduction
confidence: 99%