2021 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv48863.2021.9575135
End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning

Cited by 19 publications (9 citation statements) · References 19 publications
“…In contrast to supervised learning, RL approaches evade the requirement for large datasets by instead exploring possible actions in a simulated environment and exploiting a reward signal to learn the desired behavior. In [13], a driving policy for controlling the acceleration and steering angle is trained through RL that is applied to multiple vehicles in a common simulated environment. As there is no explicit communication between the different vehicles' policies, no cooperation is shown in traffic.…”
Section: B. Machine Learning-based Planning · mentioning · confidence: 99%
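
The setup this statement describes, a single policy trained with RL and run independently on every vehicle in a shared simulation, can be illustrated with a short sketch. This is a minimal illustration only, assuming a hypothetical environment interface (`reset`/`step` returning one local observation per vehicle) and a placeholder linear policy; it is not the cited paper's implementation.

```python
import numpy as np


class SharedDrivingPolicy:
    """Tiny linear policy; a trained neural network would take its place."""

    def __init__(self, obs_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(scale=0.1, size=(obs_dim, 2))

    def act(self, observation: np.ndarray) -> tuple[float, float]:
        # Two continuous outputs: longitudinal acceleration and steering angle.
        accel, steer = observation @ self.weights
        return float(np.clip(accel, -3.0, 3.0)), float(np.clip(steer, -0.5, 0.5))


def rollout(env, policy: SharedDrivingPolicy, steps: int = 200) -> float:
    """Run one episode with the same policy controlling every vehicle."""
    observations = env.reset()  # hypothetical API: one local observation per vehicle
    total_reward = 0.0
    for _ in range(steps):
        # Each vehicle acts on its own observation only; the per-vehicle policies
        # never exchange messages, so any cooperation can only emerge implicitly.
        actions = [policy.act(obs) for obs in observations]
        observations, rewards, done = env.step(actions)  # hypothetical API
        total_reward += float(np.sum(rewards))
        if done:
            break
    return total_reward
```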
“…Machine learning attracts high research interest for prediction as well as planning for a single ego vehicle in automated driving [12]. Reinforcement learning (RL) is used for single-ego behavior planning, as demonstrated for urban intersections [13] or highway lane changes [14]. In the latter work, the authors propose a graph-based representation of the ego vehicle's semantic environment and a fitting GNN to process it.…”
Section: Related Work · mentioning · confidence: 99%
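
The graph-based environment representation with a GNN mentioned in this statement can be sketched briefly. The following is an assumption-laden illustration, not the cited method: vehicles are graph nodes with simple kinematic features, edges connect interacting vehicles, and one mean-aggregation message-passing step produces node embeddings.

```python
import numpy as np


def gnn_layer(node_features, adjacency, w_self, w_nbr):
    """One message-passing step: h_i' = ReLU(h_i W_s + mean_{j in N(i)} h_j W_n)."""
    degree = np.maximum(adjacency.sum(axis=1, keepdims=True), 1.0)
    neighbour_mean = (adjacency @ node_features) / degree
    return np.maximum(node_features @ w_self + neighbour_mean @ w_nbr, 0.0)


# Illustrative scene: 3 vehicles, node features = (x, y, speed, heading);
# the ego vehicle (node 0) interacts with both others, which ignore each other.
nodes = np.array([[0.0, 0.0, 8.0, 0.00],
                  [12.0, 3.5, 7.0, 0.00],
                  [25.0, -3.5, 9.0, 3.14]])
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
w_self, w_nbr = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
ego_embedding = gnn_layer(nodes, adjacency, w_self, w_nbr)[0]  # embedding of the ego node
```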
“…In contrast to supervised learning, RL approaches evade the requirement for large datasets by instead exploring possible actions in a simulated environment and exploiting a reward signal to learn the desired behavior. In [12], a driving policy for controlling the acceleration and steering angle is trained through RL that is applied to multiple vehicles in a common simulated environment. As there is no explicit communication between the different vehicles' policies, no cooperation is shown in traffic.…”
Section: B. Machine Learning-based Planning · mentioning · confidence: 99%