2022
DOI: 10.3390/wevj13120239

Design of Obstacle Avoidance for Autonomous Vehicle Using Deep Q-Network and CARLA Simulator

Abstract: In this paper, we propose a deep Q-network (DQN) method to develop an autonomous vehicle control system to achieve trajectory design and collision avoidance with regard to obstacles on the road in a virtual environment. The intention of this work is to simulate a case scenario and train the DQN algorithm in a virtual environment before testing it in a real scenario in order to ensure safety while reducing costs. The CARLA simulator is used to emulate the motion of the autonomous vehicle in a virtual environmen…

Cited by 14 publications (5 citation statements)
References 15 publications
“…CARLA focusses on the testing of autonomous vehicles in realistic city settings. For example, a team of researchers applied the DQN method and used the CARLA platform to emulate the motion of a self-driving vehicle within a simulation environment, which includes an obstacle vehicle [45]. Other authors also used the given simulator and collected extensive data on human drivers' responses to road obstacles to apply behaviour-cloning network architecture with the modified loss [46].…”
Section: State of the Art Review
confidence: 99%
“…In terms of path planning, researchers have improved many classical algorithms, including the heuristic A* search algorithm, the rapidly-exploring random tree (RRT) algorithm, the bionic ant colony algorithm, and the Q-learning algorithm from machine learning [19][20][21][22]. The RRT algorithm starts from the starting point and grows a tree-shaped branch structure: it randomly samples the map, finds the node in the path tree that is closest to the sample and admits an accessible connection, connects that node to the sample, adds the sample to the path tree, and repeats until the area near the end point is explored [23].…”
Section: Introduction
confidence: 99%
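The RRT loop described in the statement above can be sketched as follows. This is a minimal 2-D illustration, not code from any of the cited works; the map bounds, step size, goal tolerance, and free-space check are all hypothetical choices made for the example:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5,
        max_iters=5000, bounds=((0.0, 10.0), (0.0, 10.0))):
    """Grow a tree from start by random sampling until the goal region is reached."""
    tree = {start: None}  # maps each node to its parent
    for _ in range(max_iters):
        # randomly sample the map
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # find the node in the path tree closest to the sample
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        # step from the nearest node toward the sample
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):  # reject nodes inside obstacles
            continue
        tree[new] = nearest   # add the new node to the path tree
        if math.dist(new, goal) <= goal_tol:
            # goal region reached: backtrack through parents to extract the path
            path, node = [], new
            while node is not None:
                path.append(node)
                node = tree[node]
            return path[::-1]
    return None  # no path found within the iteration budget

random.seed(1)
# obstacle-free toy map: every point is free
path = rrt((0.0, 0.0), (9.0, 9.0), lambda p: True)
```

In practice `is_free` would query an occupancy grid or collision checker; here it trivially accepts every point so the sketch stays self-contained.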
“…Óscar et al [13,14] used the ROS framework with deep reinforcement learning to verify autonomous driving applications on the CARLA simulator. Terapaptommakol et al [15] proposed a deep Q-network method in the CARLA simulator to develop an autonomous vehicle control system that achieves trajectory design and collision avoidance with obstacles on the road in a virtual environment. This approach avoids collisions with obstacles and produces optimized trajectories in a simulated environment.…”
Section: Introduction
confidence: 99%
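As a rough illustration of the value-update idea underlying DQN, the sketch below uses its tabular ancestor, Q-learning, on a toy one-dimensional "lane" with an obstacle. The environment, states, actions, and rewards are invented for this example and are not the CARLA setup from [15]; a real DQN would replace the table with a neural network over sensor inputs:

```python
import random

# Toy world: states 0..4 along a lane, obstacle at state 2, goal at state 4.
# Actions: 0 = stay in lane (advance), 1 = swerve (advance while avoiding).
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}

def step(state, action):
    """Advance one cell; crashing into the obstacle or reaching the goal ends the episode."""
    nxt = min(state + 1, 4)
    if nxt == 2 and action == 0:      # drove straight into the obstacle
        return nxt, -10.0, True
    if nxt == 4:                      # reached the goal
        return nxt, 10.0, True
    return nxt, -1.0, False          # small per-step cost

random.seed(0)
for _ in range(500):                  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning target: reward plus discounted best next value
        target = r + (0.0 if done else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
```

After training, the learned values at state 1 (just before the obstacle) should favor swerving over staying in lane, which is the tabular analogue of the collision-avoidance behavior the cited DQN learns from simulated experience.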