2020
DOI: 10.3390/s20185443

End-to-End Automated Lane-Change Maneuvering Considering Driving Style Using a Deep Deterministic Policy Gradient Algorithm

Abstract: Changing lanes while driving requires coordinating the lateral and longitudinal controls of a vehicle, considering its running state and the surrounding environment. Although the existing rule-based automated lane-changing method is simple, it is unsuitable for unpredictable scenarios encountered in practice. Therefore, using a deep deterministic policy gradient (DDPG) algorithm, we propose an end-to-end method for automated lane changing based on lidar data. The distance state information of the lane boundary…
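For readers unfamiliar with DDPG, the sketch below shows the core actor-critic update the algorithm is built on. This is a minimal illustration in PyTorch, not the authors' implementation: the state/action dimensions (a flattened lidar-distance vector and a 2-D steering/acceleration action), network sizes, and all hyperparameters are assumptions.

import copy
import torch
import torch.nn as nn

# Illustrative dimensions and hyperparameters -- assumptions, not paper values.
STATE_DIM, ACTION_DIM = 32, 2
GAMMA, TAU = 0.99, 0.005

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())   # actions in [-1, 1]
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    # Batched transitions; r and done are float tensors of shape [batch, 1].
    with torch.no_grad():
        # One-step TD target computed through the slow-moving target networks.
        q_next = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        y = r + GAMMA * (1.0 - done) * q_next
    # Critic step: regress Q(s, a) toward the frozen target y.
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor step: deterministic policy gradient, i.e. maximize Q(s, actor(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft (Polyak) update keeps the targets trailing the live networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for pt, ps in zip(tgt.parameters(), src.parameters()):
            pt.data.mul_(1.0 - TAU).add_(TAU * ps.data)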

Cited by 23 publications (9 citation statements)
References 22 publications
“…In [32], the DDPG algorithm was adopted to optimize torque distribution control for a multiaxle electric vehicle with in-wheel motors. In [33], an end-to-end automatic lane-changing method was proposed for autonomous vehicles using the DDPG algorithm. In [34], a Proportional–Integral–Derivative (PID)-Guide controller was designed to continuously learn through RL according to feedback from the environment, achieving high-precision attitude control of spacecraft.…”
Section: Related Work
Mentioning confidence: 99%
“…Additionally, r_o, r_c and r_end represent the smooth merging reward, congestion reward and terminal reward, respectively. Specifically, r_o measures the smoothness of the merging behaviour [27], which can be given by…”
Section: Reward
Mentioning confidence: 99%
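The snippet's formula for r_o is truncated in the source and is left as-is. Purely as an illustration of how such terms are typically folded into the scalar reward an RL agent optimizes, here is a minimal sketch; the weighted-sum form and the weights are assumptions, not the cited paper's definitions.

# Hypothetical combination of the three reward terms named above.
# The linear form and weights are illustrative, not from [27].
def total_reward(r_o: float, r_c: float, r_end: float,
                 w_o: float = 1.0, w_c: float = 0.5) -> float:
    # r_o: smooth-merging term, r_c: congestion term, r_end: sparse terminal term.
    return w_o * r_o + w_c * r_c + r_end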
“…RL methodologies aim to learn a policy that maximizes the cumulative rewards an automatic system receives as it interacts with the environment [41], [42]. One variant of RL is Deep RL, which combines DL with RL [43]. Although there are some technical differences between RL and Deep RL, for the purposes of this review, which is to differentiate IL from RL, we will use the terms RL and Deep RL interchangeably.…”
Section: Introduction
Mentioning confidence: 99%
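For context on "cumulative rewards": the standard quantity an RL policy maximizes is the expected discounted return, a textbook definition rather than anything specific to this review:

G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}, \qquad \gamma \in [0, 1)

Deep RL differs only in approximating the policy and/or value function with deep neural networks.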