2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8916924
Cooperation-Aware Reinforcement Learning for Merging in Dense Traffic

Abstract: Decision making in dense traffic can be challenging for autonomous vehicles. An autonomous system only relying on predefined road priorities and considering other drivers as moving objects will cause the vehicle to freeze and fail the maneuver. Human drivers leverage the cooperation of other drivers to avoid such deadlock situations and convince others to change their behavior. Decision making algorithms must reason about the interaction with other drivers and anticipate a broad range of driver behaviors. In t…

Cited by 97 publications (67 citation statements)
References 21 publications
“…The results are consistent with current empirical data showing that 3D audiovisual artifacts in the virtual learning environment served as rich resources for collaboration by students [7,34].…”

Section: Discussion
confidence: 99%
“…11), where the ego vehicle needs to find an acceptable gap between two vehicles to get onto the highway. In the simplest approach, it is sufficient to learn longitudinal control, with which the agent reaches this position, as can be seen in [19], [58], [91]. Other papers, like [82], use full steering and acceleration control.…”

Section: Merging
confidence: 99%
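The gap-search step described above can be sketched as a simple time-headway check. This is an illustrative sketch, not the method of the cited papers: the `Vehicle` class, the `gap_acceptable` helper, and the default one-second time gap are all assumptions made here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    pos: float  # longitudinal position along the target lane (m)
    vel: float  # speed (m/s)

def gap_acceptable(ego: Vehicle, leader: Vehicle, follower: Vehicle,
                   time_gap: float = 1.0) -> bool:
    """Accept the gap if, after merging, the ego keeps at least `time_gap`
    seconds of headway to the leader ahead, and the follower behind keeps
    the same headway to the ego. Thresholds are illustrative."""
    front_gap = leader.pos - ego.pos      # space ahead of the ego
    rear_gap = ego.pos - follower.pos     # space behind the ego
    return (front_gap > time_gap * ego.vel and
            rear_gap > time_gap * follower.vel)
```

A longitudinal-control agent would then accelerate or brake until such a check passes for some gap on the target lane, at which point the lateral merge can be executed.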
“…An exciting addition can be examined in [19], where the surrounding vehicles act differently: there are cooperative and non-cooperative drivers among them. The authors trained their agents with knowledge about cooperative behavior, and also compared the results with three differently built MCTS planners.…”

Section: Merging
confidence: 99%
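Mixing cooperative and non-cooperative surrounding drivers during training can be sketched as randomizing driver parameters per episode. This is a hypothetical illustration of the idea, not the cited paper's actual driver model: the function name, the parameter names, and the numeric values are assumptions.

```python
import random

def sample_driver(coop_prob: float = 0.5) -> dict:
    """Sample a surrounding-driver parameterization for one training episode.
    Cooperative drivers yield to the merging ego (larger accepted time gap,
    willingness to decelerate); non-cooperative drivers hold their speed.
    All parameter names and values are illustrative."""
    cooperative = random.random() < coop_prob
    return {
        "cooperative": cooperative,
        "desired_time_gap": 1.5 if cooperative else 0.5,  # s
        "yield_decel": 2.0 if cooperative else 0.0,        # m/s^2
    }
```

Training against such a mixture forces the merging policy to probe whether a given driver will open a gap, rather than assuming uniform behavior.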
“…Policy Gradient. Since there are major gaps between simulated and real environments that make it difficult to train models, DRL works very well in closed environments like video games, but it is difficult to apply to real-world environments [74]–[76].…”

Section: Reinforcement Learning
confidence: 99%