2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES)
DOI: 10.1109/icves.2019.8906299
Autonomous Car-Following Approach Based on Real-time Video Frames Processing

Cited by 12 publications (5 citation statements)
References 9 publications
“…They mostly considered data from a 3D LiDAR (Light Detection and Ranging) sensor mounted on board the vehicle. To implement Simultaneous Localization and Mapping (SLAM) in AVs, LiDAR point-cloud maps are coupled with camera images and RADAR (Radio Detection and Ranging) data [14], [15] to help the control system of the AV navigate safely through dynamic environments [16]. However, LiDAR sensors are generally expensive, and the computation needed to interpret, maintain, and fuse the data in real time is likely to require power-hungry onboard components such as graphics processing units (GPUs) [17].…”
Section: Related Work
confidence: 99%
“…The crowd-sourced data may be a potential stream of training data for autonomous vehicles. In conjunction with data relevant to car-following models [50], computer vision systems can be used to detect other entities on the road. Combining these data streams would allow vehicles to synthesize different types of data and thus reason over the interactions, and therefore a richer feature space, before making decisions, potentially leading to better performance.…”
Section: Data Fusion
confidence: 99%
“…Some methods use cameras rather than kinematic sensors to achieve end-to-end control. Q-learning is employed to discretize the vehicle's actions for car following [19]. Moreover, deep Q-learning is used to let the agent take continuous actions for car following [20].…”
Section: Introduction
confidence: 99%
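The last statement mentions Q-learning with discretized actions for car following [19]. The cited papers' implementations are not reproduced here, but the general idea can be sketched as tabular Q-learning on a toy gap-keeping task; the state bins, action set, reward shape, and dynamics below are illustrative assumptions, not the method of [19] or [20].

```python
import random

# Illustrative sketch: tabular Q-learning for a toy car-following task.
# State: discretized gap to the lead vehicle (0 = dangerously close, 9 = far behind).
# Actions: change in ego speed (brake, hold, accelerate). All values are assumptions.
GAPS = 10
ACTIONS = (-1, 0, +1)   # brake / hold / accelerate
TARGET = 5              # desired gap bin

def step(gap, action):
    """Toy dynamics: accelerating closes the gap, braking opens it."""
    new_gap = min(GAPS - 1, max(0, gap - action))
    reward = -abs(new_gap - TARGET)  # penalize deviation from the target gap
    return new_gap, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(GAPS)]
    for _ in range(episodes):
        gap = rng.randrange(GAPS)
        for _ in range(30):
            # Epsilon-greedy action selection over the discrete action set.
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[gap][i])
            nxt, r = step(gap, ACTIONS[a])
            # Standard Q-learning temporal-difference update.
            Q[gap][a] += alpha * (r + gamma * max(Q[nxt]) - Q[gap][a])
            gap = nxt
    return Q

Q = train()
# Greedy policy per gap bin: accelerate when too far, brake when too close.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[g][i])]
          for g in range(GAPS)]
```

The deep Q-learning variant mentioned for continuous actions [20] would replace the table with a neural network approximator; the tabular version above only illustrates the discrete-action formulation.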