For safe and efficient path planning of unmanned ground vehicles in complex 3D environments, this paper proposes an improved deep reinforcement learning algorithm (Dual Experience Dynamic Target DDQN, DEDT DDQN) to address two difficulties of the traditional DDQN algorithm on complex maps: slow convergence under sparse rewards and value over-estimation. The algorithm improves the performance of DDQN in complex environments by partitioning incoming experiences by quality and by dynamically fusing the prior knowledge of DDQN and averaged DDQN when training the network parameters. For unstructured 3D environments, this paper adopts a path planning strategy based on the digital elevation model (DEM) that accounts for environmental characteristics and time cost. Simulation experiments on 3D maps modeled after realistic environments show that the DEDT DDQN algorithm reduces the number of inflection points and the average slope change by 40% and 16.7%, respectively, and improves the optimization-search performance and the convergence speed by 5.34% and 60%, respectively. The proposed algorithm and planning strategy are validated on two different types of maps, which verifies their effectiveness and robustness.
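The abstract names a dynamic fusion of DDQN and averaged-DDQN knowledge for target computation. As a minimal sketch of that idea (the paper's exact fusion rule is not given here; the blending weight `beta`, the snapshot list, and all function names are hypothetical), one can blend the standard double-DQN TD target with a target averaged over recent target-network snapshots:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99):
    # Double DQN: the online net selects the action, the target net evaluates it
    a_star = int(np.argmax(next_q_online))
    return reward + gamma * next_q_target[a_star]

def averaged_ddqn_target(reward, next_q_online, next_q_snapshots, gamma=0.99):
    # Averaged DDQN: evaluate the selected action with the mean over the
    # last K target-network snapshots to reduce target variance
    a_star = int(np.argmax(next_q_online))
    avg_q = np.mean([q[a_star] for q in next_q_snapshots])
    return reward + gamma * avg_q

def fused_target(reward, next_q_online, next_q_snapshots, beta, gamma=0.99):
    # Hypothetical dynamic fusion: a schedule weight beta in [0, 1] blends the
    # plain DDQN target with the lower-variance averaged-DDQN target
    t_ddqn = ddqn_target(reward, next_q_online, next_q_snapshots[-1], gamma)
    t_avg = averaged_ddqn_target(reward, next_q_online, next_q_snapshots, gamma)
    return beta * t_ddqn + (1.0 - beta) * t_avg
```

In such a scheme, `beta` would typically be annealed over training so the update leans on one estimator early and the other later; the actual schedule used by DEDT DDQN is defined in the paper body, not in this sketch.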