Large language models (LLMs) have shown significant promise in single-document summarization (SDS), but multi-document summarization (MDS) remains comparatively underexplored. The primary challenge in MDS lies in effectively navigating the intricate relationships between multiple documents, including contradictions, redundancies, and complementarities. Drawing inspiration from chain-of-thought (CoT) prompting and from how humans approach MDS, this paper proposes a novel MDS method called Attend-Arrange-Abstract CoT (3A-COT). The method comprises three critical stages. Attend: to alleviate redundancy across multiple documents, key information should be identified first; we therefore design an Attend-prompt to extract key elements from each document and generate key information based on these key elements. Arrange: since the extracted pieces of key information may contradict or complement one another, we design an Arrange-prompt to organize the relationships and connections between them across documents into a coherent logical structure. Abstract: finally, an Abstract-prompt is designed to guide the generation of a summary from the arranged key information. To evaluate 3A-COT, we conduct experiments on four MDS test sets: Multi-News, Human-edited Multi-News, WCEP-10, and Multi-XScience. Experimental results show that 3A-COT outperforms other LLM-based MDS methods. Furthermore, in-depth analysis experiments evaluate 3A-COT's efficacy in handling contradictions, redundancies, and complementarities.
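To make the three-stage pipeline concrete, the sketch below shows one plausible way to chain the Attend, Arrange, and Abstract prompts; it is a minimal illustration under stated assumptions, not the paper's actual implementation. The function name `three_a_cot`, the generic text-in/text-out `llm` callable, and the prompt wordings are all hypothetical placeholders.

```python
from typing import Callable, List


def three_a_cot(documents: List[str], llm: Callable[[str], str]) -> str:
    """Illustrative sketch of the 3A-COT stages described in the abstract.

    `llm` is any text-in/text-out model call; the prompt texts below are
    assumptions standing in for the paper's Attend/Arrange/Abstract prompts.
    """
    # Attend: extract key elements from each document and condense them
    # into key information, reducing redundancy before cross-document steps.
    key_info = [
        llm(
            "Extract the key elements of the following document and "
            f"summarize them as key information:\n{doc}"
        )
        for doc in documents
    ]

    # Arrange: organize the pieces of key information across documents,
    # resolving contradictions and merging complementary points into a
    # logically ordered outline.
    arranged = llm(
        "Arrange the following pieces of key information into a logically "
        "ordered outline, resolving contradictions and merging "
        "complementary points:\n" + "\n".join(f"- {k}" for k in key_info)
    )

    # Abstract: generate the final multi-document summary from the
    # arranged key information.
    return llm(
        "Write a concise summary based on the following arranged key "
        f"information:\n{arranged}"
    )
```

Any LLM wrapper with a string-to-string interface could be passed as `llm`, so the sketch stays agnostic to the underlying model and API.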