In this study, we propose a novel approach to a chronological planning problem in which multiple agents must complete tasks under precedence constraints. We model the problem as a stochastic game and solve it with multi-agent reinforcement learning algorithms. However, any change in the chronological order of tasks yields a distinct stochastic game, forcing these algorithms to relearn from scratch at a substantial cost in time. To overcome this challenge, we present a framework that incorporates meta-learning into a multi-agent reinforcement learning algorithm. The framework extracts meta-parameters from past experiences, enabling rapid adaptation to new tasks with altered chronological orders and circumventing the time-intensive nature of reinforcement learning. We instantiate the framework in a method named Reptile-MADDPG and evaluate the pre-trained model by its average reward before and after fine-tuning. On two testing tasks, our method improves the average reward from −44 to −37 after 10,000 steps of fine-tuning, significantly surpassing the two baseline methods, which attain only −51 and −44, respectively. The experimental results demonstrate the superior generalization of our method across tasks, constituting a significant contribution towards the design of intelligent unmanned systems.
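To make the meta-learning component concrete, the following is a minimal sketch of the Reptile outer loop that Reptile-MADDPG builds on. It is an illustrative toy only: the task representation, the inner-loop trainer (`inner_adapt`, here a quadratic stand-in for MADDPG training on one task ordering), and all parameter names are hypothetical placeholders, not the paper's implementation.

```python
import random

def inner_adapt(theta, task, steps=5, lr=0.1):
    """Hypothetical inner loop: a few gradient steps on one task.
    The 'loss' is a toy quadratic (phi - optimum)^2 pulling phi toward
    the task's optimum, standing in for MADDPG training on that task."""
    phi = dict(theta)
    for _ in range(steps):
        for k in phi:
            grad = 2.0 * (phi[k] - task[k])  # d/dphi of (phi - opt)^2
            phi[k] -= lr * grad
    return phi

def reptile(tasks, meta_steps=200, meta_lr=0.25):
    """Reptile meta-update: theta <- theta + meta_lr * (phi - theta),
    where phi is the result of inner-loop adaptation on a sampled task."""
    theta = {k: 0.0 for k in tasks[0]}
    for _ in range(meta_steps):
        task = random.choice(tasks)
        phi = inner_adapt(theta, task)
        for k in theta:
            theta[k] += meta_lr * (phi[k] - theta[k])
    return theta

# Two toy "tasks" whose optima differ, e.g. two task orderings.
tasks = [{"w": 1.0}, {"w": 3.0}]
random.seed(0)
meta = reptile(tasks)
```

The meta-parameters end up between the per-task optima, giving an initialization from which a few fine-tuning steps reach either task; this is the mechanism that lets the pre-trained model adapt quickly to a new chronological order instead of training from scratch.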