Recently, graph neural networks (GNNs), as an efficient representation learning method [8], have been widely utilized to build the communication learning mechanisms of MARL, generally regarding agents as nodes in a graph, with communication channels corresponding to edges. Many state-of-the-art MARL methods fall into this GNN-based communication paradigm [9], [10]. However, most GNN-based communication learning methods are not specially designed for heterogeneous scenarios, where agents have different observation spaces or action sets. Therefore, these methods cannot effectively process and transmit heterogeneous feature information, which leads to inefficient communication learning and hampers action coordination.

To solve these problems, we present a Communication Learning mechanism of multi-Agent Reinforcement learning (CLAR) for heterogeneous scenarios. The proposed mechanism utilizes heterogeneous GNNs to model the heterogeneous agents and fuses the feature information of neighboring agents to obtain high-level message representations. Besides, the proposed mechanism leverages mutual information (MI) optimization to obtain high-quality message representations for action coordination. Furthermore, we present a MARL framework that integrates value factorization with the proposed communication learning mechanism. This framework maintains the stability and scalability advantages of value factorization methods, and promotes better action coordination between agents by effectively processing and utilizing heterogeneous feature information.
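The heterogeneous fusion idea above can be sketched as per-class encoders projecting differently sized observations into a shared message space, followed by neighbor aggregation. All names, dimensions, and the mean aggregator below are illustrative assumptions, not CLAR's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two agent classes with different observation sizes (heterogeneous agents).
OBS_DIM = {"scout": 4, "attacker": 6}
MSG_DIM = 8  # shared message space (hypothetical size)

# One class-specific linear encoder maps each observation type into MSG_DIM.
encoders = {cls: rng.standard_normal((dim, MSG_DIM)) * 0.1
            for cls, dim in OBS_DIM.items()}

def encode(cls, obs):
    """Class-specific projection into the shared message space."""
    return np.tanh(obs @ encoders[cls])

def fuse(own_msg, neighbor_msgs):
    """Aggregate neighbor messages (mean) and combine with the agent's own."""
    if not neighbor_msgs:
        return own_msg
    return own_msg + np.mean(neighbor_msgs, axis=0)

# A scout agent fuses messages received from two attacker neighbors:
m0 = encode("scout", rng.standard_normal(OBS_DIM["scout"]))
neighbors = [encode("attacker", rng.standard_normal(OBS_DIM["attacker"]))
             for _ in range(2)]
h0 = fuse(m0, neighbors)  # high-level message representation, shape (8,)
```

Because every class is projected into the same space before aggregation, agents with incompatible observation formats can still exchange and fuse information.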
The primary contributions of the proposed method are as follows. 1) We present a MARL framework that integrates a communication learning mechanism with value factorization methods for heterogeneous scenarios, which addresses both the communication learning challenge of heterogeneous scenarios and the action discoordination issue of value factorization methods. 2) We are the first to introduce the MI between the local values and the message representations in MARL. We use MI maximization to learn the most valuable and expressive information from different classes of agents for better action coordination. 3) We design a heterogeneous GNN to learn heterogeneous multi-agent communication, which efficiently models heterogeneous scenarios and achieves the fusion and transmission of heterogeneous information.
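Contribution 2 relies on a tractable MI objective between local-value embeddings and message representations. A common choice is an InfoNCE-style lower bound, sketched here under illustrative assumptions (dot-product scores, in-batch negatives; the paper's actual estimator may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def infonce_lower_bound(values, messages):
    """InfoNCE-style lower bound on MI between paired samples.

    values:   (N, d) local-value embeddings
    messages: (N, d) message representations; row i is the positive
              pair for values[i], the other rows serve as negatives.
    """
    scores = values @ messages.T                 # (N, N) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return np.mean(np.diag(log_softmax)) + np.log(len(values))

# Correlated pairs should yield a higher bound than independent (shuffled) ones.
N, d = 64, 8
msgs = rng.standard_normal((N, d))
vals = msgs + 0.1 * rng.standard_normal((N, d))   # strongly correlated
paired = infonce_lower_bound(vals, msgs)
shuffled = infonce_lower_bound(vals, msgs[rng.permutation(N)])
```

Maximizing such a bound with respect to the message encoder pushes messages to carry exactly the information that is predictive of the local values, which is the coordination signal the contribution describes.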