Various emergencies occur frequently, posing threats and challenges to people's lives and social security. Consequently, multi-agent evacuation has become a significant part of the emergency response process. However, most existing works focus only on evacuating a small number of agents and do not consider the multi-agent cooperation problems caused by a growing number of agents, nor the impact of emergencies themselves. Therefore, this paper proposes a framework for event-driven multi-agent evacuation consisting of three parts: event collection, event sending, and task execution. During task execution, agents are divided into groups; each group selects a leader, and the other agents in the group move with that leader. A reinforcement learning algorithm proposed in this paper, Space Multi-Agent Deep Deterministic Policy Gradient (SMADDPG), is then used for path planning. In addition, the state, action, and reward are designed based on a Markov game, and an environment with emergencies is presented as the agent evacuation scenario. The experimental results show that the proposed method shortens path length and improves interoperability among agents when emergencies occur, which can provide a decision-making reference for emergency departments formulating evacuation plans.
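The grouping and leader-following step described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the leader-selection criterion (the agent closest to the group centroid) and the fixed follower step size are assumptions introduced here for concreteness.

```python
import numpy as np

def select_leader(positions):
    """Pick the agent closest to the group's centroid as leader.
    (Illustrative assumption; the abstract does not specify the criterion.)"""
    positions = np.asarray(positions, dtype=float)
    centroid = positions.mean(axis=0)
    dists = np.linalg.norm(positions - centroid, axis=1)
    return int(np.argmin(dists))

def follower_step(pos, leader_pos, speed=0.5):
    """A follower takes one step of length `speed` toward the leader."""
    pos = np.asarray(pos, dtype=float)
    direction = np.asarray(leader_pos, dtype=float) - pos
    norm = np.linalg.norm(direction)
    if norm < 1e-9:        # already at the leader's position
        return pos
    return pos + speed * direction / norm

# Example: a group of three agents; the leader plans the path (e.g. via
# SMADDPG), and the remaining agents simply track the leader.
group = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
leader = select_leader(group)
new_pos = follower_step(group[0], group[leader])
```

In this scheme only the leader's trajectory needs to be produced by the path-planning policy, which reduces the joint action space that the cooperative learning algorithm must handle as the number of agents grows.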