2022 American Control Conference (ACC)
DOI: 10.23919/acc53348.2022.9867314
A Multi-Agent Deep Reinforcement Learning Coordination Framework for Connected and Automated Vehicles at Merging Roadways

Cited by 10 publications (4 citation statements)
References 26 publications
“…Shi et al. [28] proposed a cooperative longitudinal control strategy for a mixed connected and automated traffic environment based on a DRL algorithm and enhanced the performance of the entire mixed traffic flow. Nakka et al. [29] proposed a decentralized multi-agent RL (MARL) framework for coordinating CAVs in a highway merging scenario, employing an actor-critic architecture with a centralized critic and decentralized actors to avoid the problem of a non-stationary environment. Fares et al. [30] applied a cooperative Q-learning algorithm based on a coordinated graph structure to optimize overall traffic congestion on a motorway through multiple ramp control.…”
Section: Related Work
confidence: 99%
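The centralized-critic/decentralized-actor pattern mentioned in the statement above can be sketched as follows. This is an illustrative minimal example, not the implementation from [29]: the agent count, observation/action dimensions, and linear networks are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, ACT_DIM = 2, 4, 1  # illustrative sizes (assumed)

# Decentralized actors: each agent maps only its LOCAL observation to an action.
actor_weights = [rng.normal(size=(OBS_DIM, ACT_DIM)) for _ in range(N_AGENTS)]

def act(agent_id, local_obs):
    """Actor i sees only its own observation (decentralized execution)."""
    return np.tanh(local_obs @ actor_weights[agent_id])

# Centralized critic: scores the JOINT observation-action vector. From the
# critic's viewpoint the other agents are part of the input rather than hidden
# dynamics, which is what mitigates non-stationarity during training.
critic_w = rng.normal(size=(N_AGENTS * (OBS_DIM + ACT_DIM),))

def q_value(joint_obs, joint_act):
    x = np.concatenate([np.ravel(joint_obs), np.ravel(joint_act)])
    return float(critic_w @ x)

obs = rng.normal(size=(N_AGENTS, OBS_DIM))           # one local obs per agent
actions = np.stack([act(i, obs[i]) for i in range(N_AGENTS)])
q = q_value(obs, actions)                            # scalar joint value
```

During training the critic's gradient can shape each actor's policy; at deployment only the local actors are needed, so execution remains decentralized.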
“…Nakka et al. [29] proposed a decentralized multi‐agent RL (MARL) framework for coordinating CAVs in a highway merging scenario and employed an actor‐critic architecture with a centralized critic and decentralized actors to avoid the problem of a non‐stationary environment. Fares et al.…”
Section: Introduction
confidence: 99%
“…Based on these merits, DRL‐based CAV controllers (S. Chen et al., 2021; Chong et al., 2013; Guan et al., 2019; M. Li et al., 2020; Yipei Wang et al., 2021; M. Zhou et al., 2020) have been favored in recent years, most of which focus on distance tracking and energy efficiency. Furthermore, multi‐agent reinforcement learning‐based CAV controllers (S. Chen et al., 2021; Ha et al., 2020; Nakka et al., 2021; Shi et al., 2021) can further improve control performance due to their cooperative decision‐making.…”
Section: Introduction
confidence: 99%