2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8968560
Dynamic Input for Deep Reinforcement Learning in Autonomous Driving

Abstract: The common pipeline in autonomous driving systems is highly modular and includes a perception component which extracts lists of surrounding objects and passes these lists to a high-level decision component. In this case, leveraging the benefits of deep reinforcement learning for high-level decision making requires special architectures to deal with multiple variable-length sequences of different object types, such as vehicles, lanes or traffic signs. At the same time, the architecture has to be able to cover i…


Cited by 46 publications (45 citation statements)
References 16 publications
“…While human drivers of CVs cannot make decisions based on all of the rich data made available through connectivity, connected AVs (CAVs) can quickly process such large amounts of information and thereby exhibit superior driving performance. This has been confirmed by several researchers who argued that V2X connectivity capabilities can significantly enhance the operational performance of AVs (Duell et al., 2016; Kreidieh et al., 2018; Stern et al., 2018; Huegle et al., 2019; Dong et al., 2020b; Li et al., 2020b). These researchers demonstrated that the interactions between AVs and connected human-driven vehicles (CHDVs) present several opportunities where connectivity can enrich AV performance.…”
Section: Promoting Automation Using Connectivity
confidence: 67%
“…In reinforcement learning based autonomous driving systems, for example, the level of cooperation between vehicles in the traffic stream influences the benefits that automated or autonomous vehicles can derive from connectivity. Huegle et al. (2019) developed a deep reinforcement learning (DRL) based autonomous driving system that communicates with connected human-driven vehicles in order to execute lane-change maneuvers. The researchers showed that a dynamic input of the information obtained via connectivity is useful in training an efficient autonomous driving model.…”
Section: Promoting Automation Using Connectivity
confidence: 99%
“…The dimension of the state representation can grow exponentially for complex scenarios as the number of vehicles n or intersecting lanes m increases. Another challenge is that the permutation of the input elements can change, which may cause the network to react differently to the same scenario under different input orderings [28].…”
Section: Scalable Reinforcement Learning
confidence: 99%
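The order-sensitivity described above can be illustrated with a minimal NumPy sketch (hypothetical feature values and weights, not taken from the paper): a fixed-size layer applied to a concatenated object list produces different outputs when the same objects are fed in a different order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two surrounding vehicles, each described by 3 features (hypothetical values).
objects = np.array([[1.0, 0.2, 0.5],
                    [0.3, 0.9, 0.1]])

# A fixed-size linear layer over the concatenated state vector (2 * 3 inputs).
W = rng.standard_normal((4, 6))

out_a = W @ objects.reshape(-1)          # order: vehicle 1, then vehicle 2
out_b = W @ objects[::-1].reshape(-1)    # order: vehicle 2, then vehicle 1

# Same scene, different input permutation -> different network output.
print(np.allclose(out_a, out_b))  # False
```

This is exactly the failure mode the citing authors point to: the same traffic scene, encoded in a different object order, is no longer the same input to the network.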
“…In order to address this problem, we use the Deep Sets [29] architecture, which decouples the network size of machine learning algorithms from the number of input elements. The Deep Sets approach has already been applied to learn a lane-change policy with DQN for highway scenarios in [28]. In this paper, we propose a Deep Sets architecture for automated navigation at occluded merging and crossing scenarios (Figure 2).…”
Section: Scalable Reinforcement Learning
confidence: 99%
“…Combined with a sibling AI technique, reinforcement learning (RL), to yield DRL, DL has also been applied to operational control and planning tasks in transportation such as traffic signal control (Wu et al., 2019) and pavement maintenance planning (Yao et al., 2020). Besides these successful applications, DRL has been used in multiple complex CAV driving control tasks including lane-keeping and obstacle avoidance (El Sallab et al., 2017; S. Chen, Leng, et al., 2020), lane-changing (Dong, Chen, Li, Du, et al., 2021; Dong, Chen, Li, Ha, et al., 2020; Huegle et al., 2020), merging maneuvers (Saxena et al., 2019), crossing traffic avoidance (Wang et al., 2020), and roundabout driving (J. Chen, Yuan, et al., 2019). DRL-based controllers have a number of advantages that are significant and consequential.…”
Section: Introduction
confidence: 99%