When a lunar assisted robot helps an astronaut turn over or transports an astronaut from the ground, the trajectories of the robot's two arms must be planned automatically according to the unstructured environment of the lunar surface. In this paper, a dual-arm control strategy for a lunar assisted robot based on hierarchical reinforcement learning is proposed, and the trajectory planning problem is modeled as a two-layer Markov decision process. For training, a reward function design method inspired by the artificial potential field method is proposed: reward information is fed back as a dense reward, which significantly reduces the invalid exploration space and improves learning efficiency. Large-scale tests are carried out in both simulated and physical environments, and the results demonstrate the effectiveness of the proposed method. This research is of significance for human–robot interaction, environmental interaction, and intelligent robot control.
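The abstract does not give the reward formula, but the artificial-potential-field idea it names is commonly realized as the negative of an attractive potential toward the goal plus a repulsive potential near obstacles, evaluated at every step so the agent receives dense feedback. The sketch below illustrates that construction; the function name, gains (`k_att`, `k_rep`), and influence radius `d0` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def apf_dense_reward(ee_pos, goal_pos, obstacle_pos,
                     k_att=1.0, k_rep=0.5, d0=0.2):
    """Dense reward shaped like an artificial potential field.

    The attractive term pulls the end-effector toward the goal;
    the repulsive term penalizes proximity to an obstacle inside
    the influence radius d0. All gains are illustrative, not the
    paper's actual parameters.
    """
    # Attractive potential: quadratic in the distance to the goal.
    d_goal = np.linalg.norm(goal_pos - ee_pos)
    u_att = 0.5 * k_att * d_goal ** 2

    # Repulsive potential: active only within radius d0 of the obstacle.
    d_obs = np.linalg.norm(obstacle_pos - ee_pos)
    if d_obs < d0:
        u_rep = 0.5 * k_rep * (1.0 / d_obs - 1.0 / d0) ** 2
    else:
        u_rep = 0.0

    # Reward is the negative total potential, so descending the field
    # yields increasing reward at every step (dense feedback).
    return -(u_att + u_rep)
```

Because this reward is defined at every state rather than only at task completion, a policy gets a gradient-like signal throughout the workspace, which is the mechanism the abstract credits with pruning invalid exploration.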