2020 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv47402.2020.9304542
A Multi-Task Reinforcement Learning Approach for Navigating Unsignalized Intersections

Cited by 31 publications (24 citation statements)
References 11 publications
“…It gains significant quantitative improvements compared with a DQN baseline. In [10], the unsignalized intersection navigation task is modeled as a multi-task RL problem, in which turning left, turning right, and going straight are treated as sub-tasks. Through a multi-task learning framework, the agent learns to handle all three navigation tasks simultaneously and performs competitively with single-task agents.…”
Section: Intersection Navigation
confidence: 99%
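The multi-task scheme summarized in the excerpt above (one agent handling turning left, turning right, and going straight) is commonly implemented by conditioning a single policy or value function on a task identifier. A minimal sketch, assuming a one-hot task encoding appended to the observation — the actual network and encoding in [10] may differ:

```python
# Hypothetical sketch: task-conditioned state encoding for multi-task RL.
# The sub-task ("left", "straight", "right") is appended to the observation
# as a one-hot vector, so one value function can serve all three tasks
# instead of training three separate single-task agents.

TASKS = ["left", "straight", "right"]

def encode(observation, task):
    """Concatenate the base observation with a task one-hot (assumed scheme)."""
    onehot = tuple(1.0 if t == task else 0.0 for t in TASKS)
    return tuple(observation) + onehot

# One shared value table keyed on the task-conditioned state and the action.
shared_q = {}
state = encode((0.5, -1.2), "left")   # toy observation plus "left" task flag
shared_q[(state, "yield")] = 0.0
```

With this conditioning, experience from all three sub-tasks updates the same parameters, which is one way a multi-task agent can match single-task baselines while training a single model.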
“…An optimal correction of an originally dangerous action can enhance safety without losing much efficiency. In this paper, the assumption that the ego vehicle in unsignalized intersections has motion-planning information holds, as in [10,11]. We aim to investigate the multi-task unsignalized intersection navigation problem in dense traffic, including turning left, going straight, and turning right.…”
Section: Introduction
confidence: 99%
“…In Table I, we compare our approach with the current state-of-the-art in navigating unsignalized intersections, roundabouts, and merging scenarios on the basis of optimality guarantees, multi-agent versus single-agent planning (MAP), description of action space (AS), incentive compatibility (IC), and real-world applicability. DRL-based methods [2], [19], [20], [25], [26] learn a navigation policy using the notion of expected reward received by an agent from taking a particular action in a particular state. This policy is learned from trajectories obtained via traffic simulators using Q-learning [27] and is both hard and expensive to train.…”
Section: Prior Work
confidence: 99%
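The Q-learning training referenced in the excerpt above fits a value estimate to simulator trajectories via the standard temporal-difference update. A minimal tabular sketch of that rule — the cited DRL methods approximate it with neural networks, so this is illustrative only:

```python
# Tabular Q-learning update (Watkins): the rule that deep RL methods
# approximate with function approximators over simulator trajectories.

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Apply Q(s,a) += alpha * (r + gamma * max_a' Q(s_next, a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q[(s, a)]

# Example: a single transition from a hypothetical intersection state.
Q = {}
v = q_update(Q, "approach", "go", 1.0, "cleared", ["go", "wait"])
```

The training cost noted in the quote comes from repeating this update over millions of such transitions until the estimates converge.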
“…Capasso et al. [2] use additional signals such as traffic signs (stop, yield, none) to regulate the movement and actions of other agents. In terms of real-world applications, Kai et al. [20] learn a unified policy for multiple tasks and also demonstrate their approach on a real robot.…”
Section: Prior Work
confidence: 99%