2023
DOI: 10.1109/tase.2022.3168621
Unified Automatic Control of Vehicular Systems With Reinforcement Learning

Abstract (truncated): …systems considered in this article, the presented methodology emphasizes ease of application within any simulated vehicular system while minimizing manual effort by the practitioner. The control inputs consist of local information around each automated vehicle, while the control outputs are commands for longitudinal acceleration and lateral lane changes. Experimental results are presented for relatively small simulated traffic systems, though the methodology can be adapted to larger vehicular systems with mino…
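As a reading aid for the interface described in the abstract, below is a minimal, hypothetical Python sketch of the per-vehicle observation and action structure it implies: local information around the automated vehicle in, a longitudinal acceleration plus a lateral lane-change command out. All class names, field names, and value ranges here are illustrative assumptions, not the paper's actual API.

from dataclasses import dataclass
from typing import List

@dataclass
class LocalObservation:
    # Local information around one automated vehicle (illustrative fields).
    ego_speed: float            # own speed, m/s
    lead_gap: float             # gap to the vehicle ahead in the same lane, m
    lead_rel_speed: float       # lead speed minus ego speed, m/s
    neighbor_gaps: List[float]  # gaps to nearest vehicles in adjacent lanes, m

@dataclass
class ControlCommand:
    # The two control outputs described in the abstract.
    acceleration: float  # longitudinal command, clipped here to [-3, 3] m/s^2
    lane_change: int     # lateral command: -1 = move left, 0 = keep lane, +1 = move right

def placeholder_policy(obs: LocalObservation) -> ControlCommand:
    # Stand-in for a trained RL policy: hold a rough gap, never change lanes.
    accel = 0.5 if obs.lead_gap > 30.0 else -1.0
    accel = max(-3.0, min(3.0, accel))
    return ControlCommand(acceleration=accel, lane_change=0)

if __name__ == "__main__":
    obs = LocalObservation(ego_speed=12.0, lead_gap=18.5,
                           lead_rel_speed=-1.2, neighbor_gaps=[25.0, 40.0])
    print(placeholder_policy(obs))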

Cited by 21 publications (6 citation statements) · References 42 publications

“…Yan et al [82] presented a unified methodology for designing multi-agent vehicular systems and developing a corresponding CACC system using SOTA DRL algorithms. The performant behaviors discovered through the DRL method were manually analyzed, and simple controllers inspired by these behaviors were benchmarked in various road traffic scenarios such as single ring, double ring, figure of eight, highway ramp, and intersections.…”
Section: MARL for CAVs
confidence: 99%
“…ML approaches, including RL and Meta-learning, are introduced to TSC research. RL has been widely used in many fields such as Autopilot [26], Natural Language Processing [27] and Robot Control [28], and has achieved satisfactory results. Meanwhile, RL is adopted to enhance adaptive TSC approaches due to its data-driven nature.…”
Section: Related Work
confidence: 99%
“…Simulation experiments conducted in warehousing scenarios validated the performance of the proposed model in optimizing transportation routes for AGV-UAV collaboration. Finally, Yan et al [52] introduced a methodology for optimizing control strategies in vehicular systems using DRL. The methodology utilized a variable-agent, multi-task approach and was experimentally validated on mixed autonomy traffic systems.…”
Section: Advancements in RL with Simulation for Warehouse Operations
confidence: 99%