Global routing is a crucial step in the design of Very Large-Scale Integration (VLSI) circuits. However, most existing methods are heuristic algorithms that cannot jointly optimize the subproblems of global routing, resulting in congestion and overflow. To address this challenge, an enhanced Deep Reinforcement Learning (DRL)-based global router is proposed, which comprises the following strategies. First, to avoid the overestimation problem of Q-learning, the proposed global router adopts the Double Deep Q-Network (DDQN) model; the DDQN-based global router achieves better wire length optimization and convergence. Second, to prevent the agent from learning redundant information, an action elimination method is added to the action selection step, which significantly improves the convergence of the training process. Third, to avoid the unfair allocation of routing resources in serial training, concurrent training is proposed to enhance routability. Fourth, to reduce wire length and disperse routing resources, a new reward function is proposed to guide the agent toward routing solutions with shorter wire length and lower congestion standard deviation. Experimental results demonstrate that the proposed algorithm outperforms others on several important performance metrics, including wire length, convergence, routability, and congestion standard deviation. In conclusion, the proposed enhanced DRL-based global router is a promising approach to the global routing problem in VLSI design, achieving superior performance compared to heuristic methods and existing DRL-based global routers.
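The overestimation fix that motivates the DDQN choice can be illustrated with the standard Double DQN target: the online network selects the next action, while the target network evaluates it. The sketch below is a generic illustration of that selection/evaluation split, not the paper's implementation; the function name `ddqn_target` and its parameters are assumptions for illustration.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.9, done=False):
    """Compute the Double DQN bootstrap target for one transition.

    Standard Q-learning uses max_a Q_target(s', a) for both selecting and
    evaluating the next action, which biases targets upward. Double DQN
    decouples the two roles to curb that overestimation.
    """
    if done:
        # Terminal transition: no bootstrapping beyond the reward.
        return reward
    # Selection: the online network picks the greedy next action.
    best_action = int(np.argmax(next_q_online))
    # Evaluation: the target network scores the selected action.
    return reward + gamma * next_q_target[best_action]
```

For example, if the online network prefers action 1 but the target network values it modestly, the target uses the target network's (typically lower) estimate for that action rather than the target network's own maximum, which is what dampens the overestimation bias.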