2019
DOI: 10.48550/arxiv.1909.11538
Preprint

Automated Lane Change Decision Making using Deep Reinforcement Learning in Dynamic and Uncertain Highway Environment

Cited by 3 publications (2 citation statements)
References 13 publications

“…In lateral direction, we penalize the lateral distance to the target lane center. In longitudinal direction, we predict constant velocity trajectories for all relevant surrounding vehicles, so that we can maintain safe distance between them and the RL agent. This is done by calculating the time-to-collision and time-headway values for the planned trajectory, relative to the relevant surrounding vehicles.…”
Section: B. Trajectory Planning
confidence: 99%
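
To make the safety check described in this citation statement concrete, the following is a minimal sketch, not the citing paper's implementation: it assumes constant-velocity predictions for a surrounding vehicle and hypothetical names, gap model, and thresholds, and evaluates time-headway and time-to-collision along a sampled ego trajectory.

```python
# Minimal sketch (assumed names and parameters, not the authors' code):
# check a planned ego trajectory against a constant-velocity prediction of a
# surrounding vehicle using time-headway (THW) and time-to-collision (TTC).

def predict_constant_velocity(position, velocity, t):
    """Predicted longitudinal position after t seconds at constant velocity."""
    return position + velocity * t

def time_headway(gap, ego_velocity):
    """Time for the ego vehicle to cover the current gap at its own speed."""
    return gap / ego_velocity if ego_velocity > 0 else float("inf")

def time_to_collision(gap, ego_velocity, lead_velocity):
    """Time until the gap closes; infinite if the ego is not closing in."""
    closing_speed = ego_velocity - lead_velocity
    return gap / closing_speed if closing_speed > 0 else float("inf")

def trajectory_is_safe(ego_traj, lead_pos, lead_vel, vehicle_length=5.0,
                       min_thw=1.0, min_ttc=3.0):
    """Check THW/TTC at every sampled point of a planned ego trajectory.

    ego_traj: list of (t, ego_position, ego_velocity) samples along the plan.
    Thresholds and vehicle length are illustrative assumptions.
    """
    for t, ego_pos, ego_vel in ego_traj:
        lead_pred = predict_constant_velocity(lead_pos, lead_vel, t)
        gap = lead_pred - ego_pos - vehicle_length
        if gap <= 0:
            return False  # predicted overlap with the surrounding vehicle
        if time_headway(gap, ego_vel) < min_thw:
            return False
        if time_to_collision(gap, ego_vel, lead_vel) < min_ttc:
            return False
    return True

# Example: a 3 s plan sampled at 1 s, ego at 25 m/s behind a 20 m/s vehicle 40 m ahead.
plan = [(t, 25.0 * t, 25.0) for t in (1.0, 2.0, 3.0)]
print(trajectory_is_safe(plan, lead_pos=40.0, lead_vel=20.0))
```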
“…In the discrete actions case (e.g. [1,6,31] and also [13,7,8,9,10]), the agent can choose from actions such as keep lane, lane-change to the left or lane-change to the right or fixed accelerating/decelerating steps. While the small and fixed action set leads to fast learning progress, the lane-change maneuvers are usually with fixed execution duration, resulting in a suboptimal, unnatural behavior in tight situations.…”
Section: Introduction
confidence: 99%
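
As a rough illustration of the discrete-action formulation this citation statement refers to, a minimal sketch of such a high-level action set is shown below; the action names, step sizes, and mapping function are hypothetical and differ between the cited papers.

```python
from enum import Enum

# Minimal sketch of a discrete high-level action set for a lane-change RL agent
# (illustrative only; the exact actions and step sizes vary across the cited work).
class LaneChangeAction(Enum):
    KEEP_LANE = 0
    CHANGE_LEFT = 1
    CHANGE_RIGHT = 2
    ACCELERATE = 3   # fixed acceleration step, e.g. +1 m/s
    DECELERATE = 4   # fixed deceleration step, e.g. -1 m/s

def apply_action(action, current_lane, target_speed, speed_step=1.0):
    """Map a discrete action to a (target lane, target speed) command for the planner."""
    if action is LaneChangeAction.CHANGE_LEFT:
        return current_lane - 1, target_speed
    if action is LaneChangeAction.CHANGE_RIGHT:
        return current_lane + 1, target_speed
    if action is LaneChangeAction.ACCELERATE:
        return current_lane, target_speed + speed_step
    if action is LaneChangeAction.DECELERATE:
        return current_lane, target_speed - speed_step
    return current_lane, target_speed  # KEEP_LANE

# Example: request a lane change to the left from lane 2 at 25 m/s.
print(apply_action(LaneChangeAction.CHANGE_LEFT, current_lane=2, target_speed=25.0))
```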