With the rapid development of UAV and communication technologies, UAV-to-ground communication relaying has become a research hotspot. This paper proposes a Multi-Agent Reinforcement Learning (MARL) method that combines an ε-greedy exploration strategy with the multi-agent proximal policy optimization (MAPPO) algorithm to mitigate convergence to local optima, improving the communication efficiency and task execution capability of UAV swarm control. The path planning problem in multi-UAV-to-ground relay communication is studied, with particular focus on the application of the proposed Mix-Greedy MAPPO algorithm. The state space, action space, communication model, training environment, and reward function are designed to account for practical task and entity constraints, including safe separation distances, no-fly zones, survival in threatened environments, and energy consumption. The results show that, in the multi-UAV ground communication relay path planning task, the Mix-Greedy MAPPO algorithm significantly improves communication probability, reduces energy consumption, avoids no-fly zones, and encourages exploration compared with competing algorithms. After training for the same number of steps, Mix-Greedy MAPPO attains an average reward 45.9% higher than MAPPO and several times higher than the multi-agent soft actor-critic (MASAC) and multi-agent deep deterministic policy gradient (MADDPG) algorithms. The experimental results verify the superiority and adaptability of the proposed algorithm in complex environments.
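To make the core idea concrete, the sketch below illustrates one plausible reading of the Mix-Greedy mechanism: ε-greedy exploration layered over MAPPO's stochastic policy, where each agent occasionally takes a uniformly random action instead of sampling from its learned action distribution, with ε annealed over training. The function names (`mix_greedy_action`, `decay_epsilon`) and the decay schedule are illustrative assumptions, not details taken from the paper.

```python
import random

def mix_greedy_action(policy_probs, epsilon, rng=random):
    """Pick a discrete action for one agent.

    With probability `epsilon`, explore via a uniform random action;
    otherwise, sample from the learned MAPPO categorical policy.
    `policy_probs` is the policy's action-probability vector.
    """
    n_actions = len(policy_probs)
    if rng.random() < epsilon:
        # Exploration branch: uniform random action.
        return rng.randrange(n_actions)
    # Exploitation branch: sample from the policy distribution.
    return rng.choices(range(n_actions), weights=policy_probs, k=1)[0]

def decay_epsilon(epsilon, decay=0.995, floor=0.05):
    """Anneal epsilon toward a floor so exploration fades as training
    progresses (illustrative schedule, not from the paper)."""
    return max(floor, epsilon * decay)
```

With ε = 0 this reduces to plain MAPPO sampling; with ε > 0 the injected randomness helps the agents escape locally optimal paths early in training, which is consistent with the exploration benefit the abstract reports.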