2022
DOI: 10.1109/jsen.2022.3146307

Learning Automated Driving in Complex Intersection Scenarios Based on Camera Sensors: A Deep Reinforcement Learning Approach

Cited by 19 publications (2 citation statements)
References 40 publications
“…However, poor data efficiency forces the regular CNN visual encoder to extract object features unreliably. Given the outstanding performance of the Transformer, many works have studied combining Transformers with RL [35, 36, 37, 38, 39]. These combinations bring improvements on some vision-based tasks, but the advantage of ViT is not consistent and can even fall behind the original visual encoder, which may be due to the massive data requirements of training on general tasks.…”
Section: Related Work
confidence: 99%
“…In general, these methods use object-based designs to create bounding boxes for recognizing objects in the environment [23, 24]. However, camera sensors suffer under varying lighting conditions and provide insufficient knowledge of regions, directions, object shape, and structure, resulting in inaccurate object-area identification [25, 26].…”
Section: Introduction
confidence: 99%