The artificial potential field approach is an efficient path-planning method. However, handling the local-stable-point problem in complex environments requires modifying the potential field, which increases the complexity of the algorithm. This study combines an improved black-hole potential field with reinforcement learning to address scenarios containing local stable points. The black-hole potential field serves as the environment for a reinforcement learning algorithm, in which agents automatically adapt to the environment and learn to use basic environmental information to find targets. Moreover, trained agents adapt to variable environments through curriculum learning. Visualization of the avoidance process demonstrates how agents avoid obstacles and reach the target. Our method is evaluated in both static and dynamic experiments. The results show that agents automatically learn to escape local stable points without prior knowledge.
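To make the local-stable-point problem concrete, the following is a minimal sketch (not the paper's method) of a conventional attractive/repulsive potential field planner in the style of Khatib's formulation. When an obstacle lies directly between the start and the goal, the attractive and repulsive gradients cancel and plain gradient descent stalls short of the goal. All gains, radii, and function names here are illustrative assumptions.

```python
import numpy as np

def attractive_grad(pos, goal, k_att=1.0):
    # Gradient of the quadratic attractive potential U_att = 0.5 * k_att * ||pos - goal||^2
    return k_att * (pos - goal)

def repulsive_grad(pos, obstacle, k_rep=1.0, d0=1.5):
    # Gradient of the standard repulsive potential
    # U_rep = 0.5 * k_rep * (1/d - 1/d0)^2, active only within influence radius d0
    diff = pos - obstacle
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(pos)
    return -k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)

def plan(start, goal, obstacle, step=0.05, iters=500, tol=0.1):
    # Plain gradient descent on the combined field; returns the final position,
    # which may be a local stable point rather than the goal.
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    for _ in range(iters):
        if np.linalg.norm(pos - goal) < tol:
            break
        grad = attractive_grad(pos, goal) + repulsive_grad(pos, obstacle)
        pos = pos - step * grad
    return pos

# Obstacle collinear with start and goal: the gradients balance and the agent stalls.
stuck = plan(start=[0.0, 0.0], goal=[4.0, 0.0], obstacle=[2.0, 0.0])
# Obstacle far away: the attractive field alone pulls the agent to the goal.
free = plan(start=[0.0, 0.0], goal=[4.0, 0.0], obstacle=[10.0, 10.0])
```

In the collinear case the planner settles at an equilibrium in front of the obstacle, which is exactly the situation the paper's agents learn to escape without hand-tuned field modifications.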