In recent years, unmanned aerial vehicles (UAVs) have been considered for many applications, such as disaster prevention and control, logistics and transportation, and wireless communication. Most UAVs must be operated manually by remote control, which is challenging in many environments. Autonomous UAVs have therefore attracted significant research interest, yet most existing autonomous navigation algorithms suffer from long computation times and unsatisfactory performance. Hence, we propose a deep reinforcement learning (DRL) UAV path planning algorithm based on a cumulative reward model and region segmentation. The proposed region segmentation reduces the probability that a DRL agent falls into a local-optimum trap, while the proposed cumulative reward model accounts for both the distance from a node to the destination and the density of obstacles near the node, addressing the problem of sparse training data that DRL algorithms face in the path planning task. We evaluated the region segmentation algorithm and the cumulative reward model with different DRL techniques and show that the cumulative reward model improves the training efficiency of deep neural networks by 30.8%, and that the region segmentation algorithm enables a deep Q-network (DQN) agent to avoid 99% of local-optimum traps and a deep deterministic policy gradient (DDPG) agent to avoid 92% of them.
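The reward model described above combines two terms: the node's distance to the destination and the obstacle density in the node's neighborhood. A minimal illustrative sketch follows; the weights, the density radius, and the specific combination of terms are assumptions for demonstration, not the paper's exact formulation.

```python
import math

def cumulative_reward(node, goal, obstacles, radius=2.0,
                      w_dist=1.0, w_obs=0.5):
    """Illustrative reward: favors nodes close to the goal and
    penalizes nodes with many obstacles nearby.
    The weights w_dist, w_obs and the radius are hypothetical."""
    dist = math.dist(node, goal)  # Euclidean distance to the destination
    # Obstacle density: fraction of known obstacles within `radius` of the node.
    nearby = sum(1 for obs in obstacles if math.dist(node, obs) <= radius)
    density = nearby / max(len(obstacles), 1)
    # Negative reward, so the agent maximizes it by moving toward the goal
    # through sparsely obstructed regions.
    return -w_dist * dist - w_obs * density

# Usage: a node nearer the goal in free space receives a higher reward.
obstacles = [(1, 1), (2, 3)]
r_far = cumulative_reward((0, 0), (4, 4), obstacles)
r_near = cumulative_reward((3, 4), (4, 4), obstacles)
```

Shaping the reward this way gives the agent a non-zero learning signal at every step, rather than only at the destination, which is the intuition behind using such a model to mitigate sparse feedback during training.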