2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9562006
Reinforcement Learning for Autonomous Driving with Latent State Inference and Spatial-Temporal Relationships

Cited by 43 publications (15 citation statements)
References 22 publications
“…RL has seen a resurgence due to the success of embedding deep models in various components of the learning process (see the overview in Henderson et al., 2018). Given that many agent-based systems, such as autonomous vehicle control systems and collective animal movement, are formulated in space and time, it is natural to consider RL for such problems (e.g., Ma et al., 2021; Tampuu et al., 2017). However, for many such systems it is challenging to define a priori the local costs or rewards that govern agent behavior, which has led to interest in inverse reinforcement learning (IRL), whereby one uses observed system behavior to learn the underlying costs or rewards (e.g., Ng & Russell, 2000).…”
Section: Reinforcement Learning
confidence: 99%
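As a generic illustration of the RL setting this citation describes (an agent learning behavior from rewards defined over space and time), here is a minimal tabular Q-learning sketch on a 1-D corridor. The environment, reward values, and hyperparameters are invented for illustration and are not taken from the cited papers.

```python
import random

# Tiny 1-D corridor: the agent starts at cell 0 and must reach the goal at
# cell 4. All rewards and hyperparameters below are illustrative assumptions.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]               # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else -0.01   # hand-crafted reward signal
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

In IRL the direction is reversed: the reward line above would be unknown, and one would infer it from demonstrated trajectories instead of hand-crafting it.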
“…Graph neural networks (GNN) have achieved outstanding performance on different types of tasks in various domains. Many variants of the model architecture and message-passing rules have been proposed, such as GCN [21], Spectral GCN [22], Spatial GCN [22], ChebNet [23], GraphSAGE [24], [25], and GAT [26]. In recent years, researchers have attempted to leverage GNNs to incorporate relational inductive biases into learning-based models to solve various real-world tasks such as traffic flow forecasting and trajectory prediction [7], [14], [27]-[31].…”
Section: B Graph Neural Network
confidence: 99%
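The message-passing rule the citation refers to can be sketched without any framework: each node updates its feature by aggregating over its neighbors. The dependency-free example below uses mean aggregation with a self-loop; real GCN/GAT layers add learned weight matrices, attention coefficients, and nonlinearities. The graph and features are made up for illustration.

```python
# Undirected 4-node graph given as an adjacency list (assumed example data).
adjacency = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}  # one scalar feature per node

def propagate(adj, feats):
    """One message-passing round: mean of a node's own and neighbors' features."""
    out = {}
    for node, neighbours in adj.items():
        vals = [feats[node]] + [feats[n] for n in neighbours]
        out[node] = sum(vals) / len(vals)
    return out

h1 = propagate(adjacency, features)
print(h1[3])   # node 3 averages itself and node 2: (4.0 + 3.0) / 2 = 3.5
```

Stacking several such rounds lets information flow across multi-hop neighborhoods, which is what makes GNNs suited to relational tasks like trajectory prediction.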
“…In [64], an intention-inference method for vehicles based on the Support Vector Machine (SVM) has been proposed. In [65], reinforcement learning is proposed to achieve autonomous driving at intersections. In [66], a transfer-learning method is used to classify intersections. In [67], an autonomous driving simulation experiment is carried out through conditional imitation learning.…”
Section: The Learning-based Strategy
confidence: 99%
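To make the intention-inference idea from [64] concrete, here is a toy stand-in: classifying a vehicle's intent (turn vs. go-straight) from two hand-made features (lateral offset and heading change) with a simple nearest-centroid rule. A real system would train an SVM on richer trajectory features; every number and label below is invented for illustration.

```python
# Hypothetical labelled training features: (lateral_offset, heading_change).
TRAIN = {
    "turn":     [(0.8, 0.9), (0.7, 1.1), (0.9, 0.8)],
    "straight": [(0.1, 0.0), (0.0, 0.1), (0.2, -0.1)],
}

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def infer_intention(feature):
    """Return the label whose class centroid is closest to the observation."""
    def dist2(c):
        return (feature[0] - c[0]) ** 2 + (feature[1] - c[1]) ** 2
    return min(CENTROIDS, key=lambda lbl: dist2(CENTROIDS[lbl]))

print(infer_intention((0.75, 1.0)))    # classified as "turn"
print(infer_intention((0.05, 0.05)))   # classified as "straight"
```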