2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros47612.2022.9982041

Cola-HRL: Continuous-Lattice Hierarchical Reinforcement Learning for Autonomous Driving

Cited by 9 publications (3 citation statements) · References 32 publications
“…To improve model generalizability, Duan et al. [138] split mobility responsibilities into three different models coordinated by a centralised policy network. Building on this earlier research, Cola-HRL [139] combines a continuous-lattice state-space representation, a low-level controller, and a high-level planner, achieving higher decision-making efficiency across a range of scenarios than state-of-the-art techniques.…”
Section: Hierarchical RL
confidence: 99%
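The statement above refers to a continuous-lattice state-space representation. As a rough illustration of the general idea only (not the paper's actual formulation), the sketch below samples a sparse lattice of candidate goal points around a continuous ego state; all names and spacing parameters (`lattice_goals`, `long_step`, `lat_step`) are hypothetical.

```python
import numpy as np

def lattice_goals(ego_xy, ego_heading, n_long=3, n_lat=5,
                  long_step=10.0, lat_step=1.0):
    """Sample a sparse lattice of candidate goal points ahead of the ego
    vehicle. Illustrative sketch of a continuous-lattice representation,
    not Cola-HRL's actual construction."""
    goals = []
    cos_h, sin_h = np.cos(ego_heading), np.sin(ego_heading)
    for i in range(1, n_long + 1):                      # longitudinal offsets
        for j in range(-(n_lat // 2), n_lat // 2 + 1):  # lateral offsets
            dx, dy = i * long_step, j * lat_step
            # rotate the body-frame offset into the world frame
            gx = ego_xy[0] + dx * cos_h - dy * sin_h
            gy = ego_xy[1] + dx * sin_h + dy * cos_h
            goals.append((gx, gy))
    return np.array(goals)

# Example: a 3 x 5 lattice ahead of an ego vehicle heading along +x
print(lattice_goals(ego_xy=(0.0, 0.0), ego_heading=0.0).shape)  # (15, 2)
```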
“…The master policy network is trained to select the appropriate driving task, which greatly enhances the generalizability and effectiveness of the model. To further improve decision quality in complex scenarios, Cola-HRL [112] was proposed on the basis of [110]; it consists of three main components: a high-level planner, a low-level controller, and a continuous-lattice representation of the state space. Both the planner and the controller use this state space to generate high-quality decisions.…”
Section: Value-based Reinforcement Learning
confidence: 99%
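To make the two-level decomposition described above concrete, here is a minimal sketch of a hierarchical control loop in which a high-level planner picks a goal from a lattice of candidates and a low-level controller tracks it. The interfaces (`select_goal`, `track`) and the proportional control law are assumptions for illustration; they do not reproduce Cola-HRL's learned networks.

```python
import numpy as np

class HighLevelPlanner:
    """Picks a goal from a lattice of candidates. A learned policy would
    score candidates; this placeholder picks the one closest to a
    reference waypoint (illustrative assumption)."""
    def select_goal(self, candidates, reference_xy):
        d = np.linalg.norm(candidates - np.asarray(reference_xy), axis=1)
        return candidates[np.argmin(d)]

class LowLevelController:
    """Tracks the selected goal with a simple proportional law standing
    in for the paper's learned low-level controller."""
    def __init__(self, k_speed=0.5, k_steer=1.0):
        self.k_speed, self.k_steer = k_speed, k_steer

    def track(self, ego_xy, ego_heading, goal_xy):
        dx, dy = goal_xy[0] - ego_xy[0], goal_xy[1] - ego_xy[1]
        heading_err = np.arctan2(dy, dx) - ego_heading
        accel = self.k_speed * np.hypot(dx, dy)  # accelerate toward goal
        steer = self.k_steer * heading_err       # steer toward goal
        return accel, steer

# One planning/control step over a small hand-written candidate lattice
goals = np.array([[10.0, -1.0], [10.0, 0.0], [10.0, 1.0],
                  [20.0, -1.0], [20.0, 0.0], [20.0, 1.0]])
planner, controller = HighLevelPlanner(), LowLevelController()
goal = planner.select_goal(goals, reference_xy=(30.0, 2.0))
print(controller.track((0.0, 0.0), 0.0, goal))
```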
“…Conventional rule-based approaches [3] have achieved some success in industry but require extensive human engineering to deal with diverse real-world scenarios. Recent advances in deep learning techniques have motivated researchers [4,5,6] to employ neural networks to model complex driving policies. Imitation learning (IL) from human drivers' demonstrations is a promising solution for learning these policies, as experienced drivers can handle even the most difficult situations, and their driving data can be easily collected at scale.…”
Section: Introduction
confidence: 99%