2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc48978.2021.9564720
Learning to Drive at Unsignalized Intersections using Attention-based Deep Reinforcement Learning

Cited by 25 publications (8 citation statements)
References 15 publications
“…Gonzalez et al. [22] reported a human-like decision approach using a Partially Observable Markov Decision Process (POMDP), which can mimic the human ability of anticipating surrounding drivers' intentions in highway driving. Seong et al. [23] proposed an attention-based deep reinforcement learning framework for interactive driving at unsignalized intersections, which realizes human-likeness by learning to focus on spatially and temporally important features. Another way to realize human-likeness is to incorporate human decision rules, styles, or preferences in mechanism-based algorithms.…”
Section: Mechanism-based
confidence: 99%
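The statement above summarizes the cited paper's core idea: an agent that learns where to attend among surrounding vehicles when deciding how to cross an unsignalized intersection. As a rough illustration only, here is a minimal sketch of such an attention-based Q-network in PyTorch; it is not the authors' exact architecture, and the observation layout (per-vehicle feature vectors), layer sizes, and discrete action set are all assumptions made for the example.

```python
# Minimal sketch (not the paper's exact architecture): a Q-network that
# learns attention weights over surrounding-vehicle features, so the policy
# focuses on the spatially important agents near the intersection.
import torch
import torch.nn as nn


class AttentionQNetwork(nn.Module):
    def __init__(self, ego_dim=5, veh_dim=5, embed_dim=64, n_actions=3):
        super().__init__()
        self.ego_enc = nn.Sequential(nn.Linear(ego_dim, embed_dim), nn.ReLU())
        self.veh_enc = nn.Sequential(nn.Linear(veh_dim, embed_dim), nn.ReLU())
        # Single-head scaled dot-product attention: the ego state is the
        # query; surrounding vehicles are the keys/values.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=1,
                                          batch_first=True)
        self.q_head = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, n_actions),  # one Q-value per action
        )

    def forward(self, ego, vehicles):
        # ego: (B, ego_dim); vehicles: (B, N, veh_dim) for N nearby vehicles
        q = self.ego_enc(ego).unsqueeze(1)    # (B, 1, E) query
        kv = self.veh_enc(vehicles)           # (B, N, E) keys/values
        ctx, weights = self.attn(q, kv, kv)   # weights: who the ego attends to
        x = torch.cat([q.squeeze(1), ctx.squeeze(1)], dim=-1)
        return self.q_head(x), weights        # Q-values + attention map


# Example with 4 surrounding vehicles; assumed per-vehicle features could be
# (x, y, vx, vy, heading). Shapes and semantics are illustrative.
net = AttentionQNetwork()
q_values, attn = net(torch.randn(1, 5), torch.randn(1, 4, 5))
action = q_values.argmax(dim=-1)  # greedy action for a DQN-style agent
```

The returned attention weights indicate which surrounding vehicles dominate the ego vehicle's decision at each step, which is what the citing authors point to as the source of the framework's interpretable, human-like focus on important features.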
“…In recent years, considerable research has been devoted to the problems of decision-making and interaction of AVs in intersection scenarios. These studies have employed various approaches, including rule-based methods [12], game-theoretic methods [5], [13], and data-driven techniques [8], [9], [14], among which RL is recognized as a flexible, efficient, and potent method. However, its widespread implementation is hindered by several obstacles, one of which is training an RL model that can effectively manage a range of driving situations and decision-making tasks [8].…”
Section: A. Decision-making of AV at Intersection
confidence: 99%
“…Reinforcement learning (RL), due to its exceptional learning capability and computational efficiency, has been widely applied in the design of decision-making algorithms for AVs [7]–[9]. However, these methods mostly use a single model to handle different tasks, and coping with multiple autonomous driving decision-making scenarios and tasks with a single model remains a significant challenge for RL [8].…”
Section: Introduction
confidence: 99%
“…To mitigate the threat of potential vehicle failures to traffic safety and efficiency, Pei et al. constructed a rule-based fault-tolerant cooperative driving strategy for signal-free intersections by modeling potential vehicle failure types to balance traffic safety and efficiency [2]. However, rule-based methods often employ conservative control schemes to avoid conflicts [3], which may not effectively maximize the utilization of space resources at intersections. Mathematical optimization techniques typically establish mathematical models based on specific optimization objectives and obtain optimal traffic control strategies by solving for the optimal solution.…”
Section: Introduction
confidence: 99%
“…This alleviates computational load, leading to a more efficient training process. (3) Simulation experiments are conducted under low, medium, and high traffic flow conditions to validate the efficiency, safety, and driving comfort of the proposed algorithm.…”
Section: Introduction
confidence: 99%