Modern artificial intelligence (AI) infrastructure widely employs the remote direct memory access (RDMA) protocol for high-performance network communication, using Reliable Connection (RC) Queue Pairs (QPs) to guarantee correct, in-order end-to-end data delivery. However, as AI infrastructure continues to scale out, this RC-based QP mechanism scales poorly and is prone to congestion, degrading network transfer performance. In this paper, we propose an optimized RDMA QP communication mechanism that addresses the scalability and congestion problems of hyper-scale AI infrastructure networks. First, we replace RC-based QPs with Reliable Datagram (RD) QPs and introduce a new reliability mechanism, eliminating the need for AI processes to repeatedly establish QPs for each external peer. Second, to mitigate congestion on any single path, we enable multipath data transmission by adding a new unordered-reception method to the network software stack. Experiments and simulations show that the optimized RDMA QP communication mechanism exhibits excellent scalability in large-scale AI infrastructure, significantly reduces congestion, and improves overall network performance by more than 15%.
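To make the scalability argument concrete, the following is a minimal counting model (not the paper's implementation; function names are illustrative) of why per-peer RC QPs become a burden at scale: with RC, every communicating process pair needs its own connected QP, so total QP state grows quadratically with the number of processes, whereas an RD-style design lets each process reuse a single reliable endpoint for all peers.

```python
def rc_qp_count(num_procs: int) -> int:
    """Total QPs in a full mesh of RC connections:
    each process holds one QP per remote peer."""
    return num_procs * (num_procs - 1)


def rd_qp_count(num_procs: int) -> int:
    """Total QPs when each process keeps a single RD-style
    endpoint shared across all remote peers."""
    return num_procs


if __name__ == "__main__":
    # QP state at small, medium, and hyper-scale process counts.
    for n in (8, 1024, 16384):
        print(f"{n:6d} procs: RC={rc_qp_count(n):12d}  RD={rd_qp_count(n):6d}")
```

Under this model, a 16,384-process job needs over 268 million RC QPs cluster-wide but only 16,384 RD-style endpoints, which is the elimination of repeated QP establishment the abstract refers to.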
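The multipath idea can likewise be sketched in miniature (hypothetical class and method names, not the paper's code): when packets of one message are sprayed across several paths, they arrive out of order, so the receiver places each packet directly at its sequence offset in the destination buffer and merely tracks which sequence numbers have arrived, avoiding any reorder queue or head-of-line blocking.

```python
import random


class UnorderedReceiver:
    """Toy model of unordered reception for multipath transfer."""

    def __init__(self, total_packets: int, payload_size: int):
        self.buf = bytearray(total_packets * payload_size)
        self.payload_size = payload_size
        self.received = set()   # sequence numbers seen so far
        self.total = total_packets

    def on_packet(self, seq: int, payload: bytes) -> None:
        # Direct placement: the sequence number alone determines the
        # buffer offset, so arrival order is irrelevant.
        off = seq * self.payload_size
        self.buf[off:off + len(payload)] = payload
        self.received.add(seq)

    def complete(self) -> bool:
        # The message is done when every sequence number has arrived,
        # regardless of the order in which the packets came in.
        return len(self.received) == self.total


if __name__ == "__main__":
    msg = b"abcdefghi"
    packets = [(i, msg[i * 3:(i + 1) * 3]) for i in range(3)]
    random.shuffle(packets)          # simulate out-of-order multipath arrival
    rx = UnorderedReceiver(total_packets=3, payload_size=3)
    for seq, payload in packets:
        rx.on_packet(seq, payload)
    assert rx.complete() and bytes(rx.buf) == msg
```

A real NIC would additionally need loss detection and retransmission over the chosen paths; this sketch only shows why relaxing the ordering requirement lets a single message use multiple paths concurrently.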