2022
DOI: 10.3390/app121910145
Deep Reinforcement Learning for Vehicle Platooning at a Signalized Intersection in Mixed Traffic with Partial Detection

Abstract: The intersection management system can increase traffic capacity, vehicle safety, and the smoothness of all vehicle movement. Platoons of connected vehicles (CVs) use communication technologies to share information with each other and with infrastructure. In this paper, we propose a deep reinforcement learning (DRL) model for vehicle platooning at an isolated signalized intersection with partial detection. Moreover, we identify hyperparameters and test the system with different numbers of veh…
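As a rough illustration of the setup described in the abstract (not the paper's actual implementation), the sketch below shows how a partially detected state could be built, where only connected vehicles (CVs) are observable to the intersection agent, and passed to a small Q-network. All names, layer sizes, and the 4-approach/2-action layout are illustrative assumptions.

```python
# Hedged sketch: a partially observed state for a signalized-intersection DRL agent.
# Only connected vehicles (CVs) are detectable; conventional vehicles are invisible
# to the agent. Names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

N_APPROACHES = 4          # assumed 4-leg isolated intersection
N_ACTIONS = 2             # assumed actions: keep current phase / switch phase

def build_state(vehicles, current_phase):
    """vehicles: list of dicts with 'approach' (0..3), 'is_cv' (bool), 'speed' (m/s)."""
    counts = [0.0] * N_APPROACHES
    speeds = [0.0] * N_APPROACHES
    for v in vehicles:
        if not v["is_cv"]:
            continue                     # partial detection: undetected vehicles are skipped
        counts[v["approach"]] += 1.0
        speeds[v["approach"]] += v["speed"]
    mean_speeds = [s / c if c > 0 else 0.0 for s, c in zip(speeds, counts)]
    return torch.tensor(counts + mean_speeds + [float(current_phase)])

class QNet(nn.Module):
    """Small MLP mapping the detected-traffic state to a Q-value per signal action."""
    def __init__(self, state_dim=2 * N_APPROACHES + 1, n_actions=N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

# Example: pick a greedy action from a toy observation.
obs = build_state(
    [{"approach": 0, "is_cv": True, "speed": 8.3},
     {"approach": 0, "is_cv": False, "speed": 5.0},   # undetected, ignored
     {"approach": 2, "is_cv": True, "speed": 0.0}],
    current_phase=0,
)
action = int(QNet()(obs).argmax())
```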

Cited by 6 publications (7 citation statements)
References 41 publications
“…However, we used a modern model to optimise the network, so our results are better. Compared with the DQN model in [20,21], our model has the advantage of being able to simulate continuous actions. This is very important in traffic simulation because it makes it possible to describe the actions of agents accurately and fully.…”
Section: Discussion
confidence: 99%
“…Deep Q-learning is a combination of two algorithms: a deep neural network (DNN) and Q-learning. These models have been used in many studies to optimise signal lights and minimise waiting time [21,22]. However, these models are only applied to a single intersection, without considering the influence of adjacent intersections.…”
Section: Traffic Signal Control For Road Network
confidence: 99%
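Since the statement above only names the combination (DNN plus Q-learning), a minimal, hedged sketch of the core deep Q-learning update follows: the online network is regressed toward the target r + γ·max_a' Q_target(s', a'). The network size, discount factor, and toy batch are assumptions for illustration only.

```python
# Hedged sketch of one deep Q-learning update step: a DNN approximates Q(s, a)
# and is regressed toward the Q-learning target r + gamma * max_a' Q_target(s', a').
import torch
import torch.nn as nn

gamma = 0.99                                    # assumed discount factor
q_net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 2))
q_target = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 2))
q_target.load_state_dict(q_net.state_dict())    # periodically synced target network
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (all arguments are tensors)."""
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_target(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: 4 transitions with 9-dim states and 2 possible signal actions.
s = torch.randn(4, 9); s2 = torch.randn(4, 9)
a = torch.randint(0, 2, (4,)); r = torch.randn(4); d = torch.zeros(4)
dqn_update(s, a, r, s2, d)
```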
“…DRL-based approaches have recently gained significant attention in vehicular communication systems due to their ability to learn optimal control policies in complex and dynamic environments [9] and to enhance the automation of the aforementioned techniques. DRL has been applied to various aspects of vehicular networks, such as path planning [10], platoon formation [11], and cooperative driving [12]. In the context of beamforming, DRL-based approaches have been proposed to optimize communication links in V2V and V2X scenarios [13].…”
Section: Related Work
confidence: 99%
“…where (a_i, δ_i) are the new inputs of vehicle i with |δ_i| ≤ π/4, F_{cj,i}, j ∈ {f, r}, is as described in (5), (6), and v_{x,i} > 0 such that ξ_i ≠ 0. Therefore, any control outputs in the form of a longitudinal acceleration and a steering angle can be applied directly to the single-track model through the dynamic inversion (11). Using the new states (7), (8), and the transformed input (11), the error derivation is discussed in the next section.…”
Section: Dynamic Input Inversion
confidence: 99%
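The quoted passage refers to equations (5)-(11) of the citing paper, which are not reproduced here. Purely as a generic illustration of how (a_i, δ_i) inputs act on a single-track model, the sketch below integrates a standard kinematic bicycle model with a longitudinal acceleration a and a steering angle δ bounded by |δ| ≤ π/4. This is not the paper's dynamic single-track model or its dynamic inversion (11); the wheelbase L and time step dt are assumptions.

```python
# Hedged, generic kinematic single-track (bicycle) model, shown only to illustrate
# how (a, delta) inputs drive such a model. NOT the citing paper's dynamic model
# or its dynamic inversion (11); L and dt are assumed values.
import math

L = 2.7    # assumed wheelbase [m]
dt = 0.05  # assumed integration step [s]

def step(state, a, delta):
    """state = (x, y, psi, v); a = longitudinal acceleration, delta = steering angle."""
    delta = max(-math.pi / 4, min(math.pi / 4, delta))   # enforce |delta| <= pi/4
    x, y, psi, v = state
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    psi += (v / L) * math.tan(delta) * dt
    v = max(0.0, v + a * dt)   # clamp at zero; the quoted condition assumes v_x > 0
    return (x, y, psi, v)

# Example: constant mild acceleration and a small left steer for 1 second.
s = (0.0, 0.0, 0.0, 5.0)
for _ in range(20):
    s = step(s, a=0.5, delta=0.05)
print(s)
```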
“…The rapid development of artificial intelligence technology, especially reinforcement learning (RL), has brought benefits to autonomous driving [5], [11], [12]. The application of artificial neural networks to tracking and autonomous vehicles can be traced back to the 1990s; see, e.g., [13], [14].…”
Section: Introduction
confidence: 99%