2022 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv51971.2022.9827302

Tackling Real-World Autonomous Driving using Deep Reinforcement Learning

Abstract: In the typical autonomous driving stack, planning and control systems represent two of the most crucial components, in which data retrieved by sensors and processed by perception algorithms are used to implement a safe and comfortable self-driving behavior. In particular, the planning module predicts the path the autonomous car should follow by taking the correct high-level maneuver, while control systems perform a sequence of low-level actions, controlling steering angle, throttle and brake. In this work, we prop…
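To make the planner/controller split described in the abstract concrete, the sketch below shows one minimal way such an interface could look. It is an illustration only, assuming invented names (Planner, Controller, VehicleState, ControlCommand, Maneuver) and arbitrary proportional gains; it is not the architecture or code from the paper.

# Hypothetical sketch of a planning/control split: the planner picks a high-level
# maneuver and a reference path, the controller turns it into low-level commands.
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple


class Maneuver(Enum):
    KEEP_LANE = 0
    CHANGE_LEFT = 1
    CHANGE_RIGHT = 2


@dataclass
class VehicleState:
    x: float        # longitudinal position [m]
    y: float        # lateral position [m]
    speed: float    # current speed [m/s]


@dataclass
class ControlCommand:
    steering: float  # steering angle [rad]
    throttle: float  # normalized throttle in [0, 1]
    brake: float     # normalized brake in [0, 1]


class Planner:
    """High-level module: chooses a maneuver and a short reference path."""

    def plan(self, state: VehicleState) -> Tuple[Maneuver, List[Tuple[float, float]]]:
        # Placeholder logic: keep the lane and follow a straight reference path.
        path = [(state.x + 5.0 * i, state.y) for i in range(1, 6)]
        return Maneuver.KEEP_LANE, path


class Controller:
    """Low-level module: tracks the reference path with steering/throttle/brake."""

    def act(self, state: VehicleState, path: List[Tuple[float, float]],
            target_speed: float = 10.0) -> ControlCommand:
        # Crude proportional tracking of the first waypoint and the target speed.
        dx, dy = path[0][0] - state.x, path[0][1] - state.y
        steering = 0.5 * (dy / max(dx, 1e-3))
        speed_error = target_speed - state.speed
        throttle = max(0.0, min(1.0, 0.1 * speed_error))
        brake = max(0.0, min(1.0, -0.1 * speed_error))
        return ControlCommand(steering, throttle, brake)


if __name__ == "__main__":
    state = VehicleState(x=0.0, y=0.2, speed=8.0)
    maneuver, path = Planner().plan(state)
    print(maneuver, Controller().act(state, path))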

Cited by 8 publications (4 citation statements). References: 34 publications.
“…RL enables the systems to learn from the data obtained from the environment to make the correct decision. RL algorithms are used for decision-making and maneuver-execution systems such as lane changing and keeping [76][77][78][79][80][81], overtaking maneuvers [82], and intersection and roundabout handling [83][84]. According to [76], there are two crucial components of autonomous driving systems: planning and control systems…”
Section: Autonomous Driving (citation type: mentioning)
confidence: 99%
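As a rough illustration of the kind of discrete maneuver decisions the quoted statement refers to (lane keeping, lane changes), the toy environment below exposes a three-action decision space for an RL agent. The class, state encoding, and reward are invented for this sketch under simplifying assumptions and do not come from the cited works.

# Toy high-level decision environment: the agent chooses keep / change left /
# change right and is rewarded for avoiding the lane occupied by a slow vehicle.
import random

LANES = 3                       # lane indices 0..2
ACTIONS = {0: "keep", 1: "change_left", 2: "change_right"}


class LaneDecisionEnv:
    def reset(self):
        self.lane = 1                            # start in the middle lane
        self.blocked = random.randrange(LANES)   # lane with a slow vehicle
        self.t = 0
        return (self.lane, self.blocked)

    def step(self, action):
        if action == 1:
            self.lane = max(0, self.lane - 1)
        elif action == 2:
            self.lane = min(LANES - 1, self.lane + 1)
        # Progress bonus, penalty for staying behind the slow vehicle.
        reward = 1.0 if self.lane != self.blocked else -1.0
        self.t += 1
        done = self.t >= 20
        return (self.lane, self.blocked), reward, done

An agent interacting with this environment would observe (current lane, blocked lane), pick one of the three maneuvers, and learn a policy from the reward signal, which is the basic loop the cited lane-change and overtaking works build on.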
“…RL algorithms are used for decision-making and maneuver-execution systems such as lane changing and keeping [76][77][78][79][80][81], overtaking maneuvers [82], and intersection and roundabout handling [83][84]. According to [76], there are two crucial components of autonomous driving systems: planning and control systems. The planning systems predict the path the self-driving car should take, while the control systems are responsible for low-level actions like controlling the steering angle, throttle, and brake…”
Section: Autonomous Driving (citation type: mentioning)
confidence: 99%
“…The development of RL in the field of autonomous driving has transitioned from foundational models to advanced algorithms capable of addressing complex and dynamic driving tasks. Early RL methods focused on simple control tasks [10], laying the foundation for more complex methods such as Deep Q-Network (DQN) for higher-dimensional state and action spaces [11], Deep Deterministic Policy Gradient (DDPG) [12,13], Proximal Policy Optimization (PPO) [14], Trust Region Policy Optimization (TRPO) [14], and Asynchronous Advantage Actor-Critic (A3C) [15], among others [16]. These methods have been used in the field of autonomous driving and have demonstrated good performance [15,17,18]…”
Section: Introduction (citation type: mentioning)
confidence: 99%
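Of the algorithms listed in this statement, DQN is the simplest to sketch. The snippet below shows a minimal temporal-difference update with a target network, written in PyTorch; the network sizes, hyperparameters, and batch format are assumptions made for illustration and are not taken from the cited works.

# Minimal DQN update step (sketch): Q-network, target network, and one
# smooth-L1 TD update on a batch of (state, action, reward, next_state, done).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, GAMMA = 8, 3, 0.99

def make_q_net():
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

q_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(batch):
    """One TD update on a batch sampled from a replay buffer."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from the (periodically synced) target network.
    with torch.no_grad():
        next_q = target_net(next_states).max(1).values
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    loss = F.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random batch of 32 transitions.
batch = (
    torch.randn(32, STATE_DIM),
    torch.randint(0, N_ACTIONS, (32,)),
    torch.randn(32),
    torch.randn(32, STATE_DIM),
    torch.zeros(32),
)
print(dqn_update(batch))

Policy-gradient methods such as DDPG, PPO, TRPO, and A3C replace this value-based update with direct optimization of a (possibly continuous) policy, which is what makes them attractive for low-level control of steering and throttle.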
“…Early RL methods focused on simple control tasks [10], laying the foundation for more complex methods such as Deep Q-Network (DQN) for higher-dimensional state and action spaces [11], Deep Deterministic Policy Gradient (DDPG) [12,13], Proximal Policy Optimization (PPO) [14], Trust Region Policy Optimization (TRPO) [14], and Asynchronous Advantage Actor-Critic (A3C) [15], among others [16]. These methods have been used in the field of autonomous driving and have demonstrated good performance [15,17,18]. However, despite the remarkable performance of autonomous driving products showcased by companies such as Waymo, Baidu Apollo, and others in regular traffic, their safety reports have documented numerous emergency takeover incidents when faced with unknown or complex situations…”
Section: Introduction (citation type: mentioning)
confidence: 99%