A mobile robot path planning method based on improved deep reinforcement learning is proposed. First, to match the robot's actual kinematic model, a continuous environmental state space and a discrete action space are designed. An improved deep Q-network (DQN) method is then proposed, which takes directly collected information as training samples and combines the robot's environmental state features with the target point to be reached as the network input. The network outputs the Q-values at the current position, and actions are selected with an ε-greedy strategy. Finally, a reward function that incorporates the artificial potential field method is designed to optimize the state-action space; it alleviates the sparse-reward problem in the environmental state space and makes the robot's action selection more accurate. Experiments show that, compared with the classical DQN method, the average loss is reduced by 36.87% and the average reward is increased by 12.96%, effectively improving the working efficiency of the mobile robot.
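The two reusable ideas in the abstract, ε-greedy action selection over a Q-value vector and a dense reward shaped by an artificial potential field (attractive toward the goal, repulsive near obstacles), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gain parameters `k_att`, `k_rep`, and the obstacle influence radius `d0` are assumed placeholder values.

```python
import math
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon explore a random action,
    otherwise exploit the action with the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def apf_shaped_reward(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Dense reward derived from an artificial potential field:
    attractive potential pulls toward the goal, repulsive potential
    penalizes positions within influence radius d0 of an obstacle.
    Reward is the negative total potential, so approaching the goal
    (and keeping clear of obstacles) yields a higher reward."""
    u_att = 0.5 * k_att * sum((p - g) ** 2 for p, g in zip(pos, goal))
    u_rep = 0.0
    for obs in obstacles:
        d = math.dist(pos, obs)
        if 0 < d < d0:
            u_rep += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return -(u_att + u_rep)
```

With ε = 0 the selection is purely greedy, and because the reward is the negative potential it grows as the robot nears the goal, giving the agent a gradient to follow even in regions where a plain goal-reached reward would be zero (the sparse-reward problem the abstract mentions).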