Generative adversarial networks (GANs) have achieved remarkable success and growing popularity in recent years. However, understanding of the min-max game in GAN training remains limited. In this paper, we first analyze the min-max game in GANs from the perspective of game theory with incomplete information and introduce a new viewpoint on GAN training: the min-max game in existing GANs is unfair during training, leading to sub-optimal convergence. To tackle this, we propose a novel GAN called Information Gap GAN (IG-GAN), which consists of one generator (G) and two discriminators (D1 and D2). Specifically, we apply different data augmentation methods to D1 and D2, respectively. The information gap between the two augmentation methods changes the information received by each player in the min-max game, so that all three players (G, D1, and D2) in IG-GAN operate with incomplete information; this improves the fairness of the min-max game and yields better convergence. We conduct extensive experiments under both large-scale and limited-data settings on several common datasets with two backbones, i.e., BigGAN and StyleGAN2. The results demonstrate that IG-GAN achieves a higher Inception Score (IS) and a lower Fréchet Inception Distance (FID) than other GANs. Code is available at https://github.com/zzhang05/IGGAN
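
To make the one-generator, two-discriminator setup concrete, the following is a minimal PyTorch sketch of a single training step in which each discriminator only sees its own augmented view of the real and generated samples. The toy networks, the particular augmentations (random horizontal flip and brightness jitter), and all hyperparameters are illustrative assumptions, not the paper's BigGAN/StyleGAN2 configuration or losses.

```python
# Minimal sketch (assumptions noted above), not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyG(nn.Module):
    def __init__(self, z_dim=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class ToyD(nn.Module):
    def __init__(self, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, 256),
                                 nn.LeakyReLU(0.2), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def aug1(x):  # augmentation seen only by D1 (assumed: random horizontal flip)
    return torch.flip(x, dims=[3]) if torch.rand(()) < 0.5 else x

def aug2(x):  # augmentation seen only by D2 (assumed: small brightness jitter)
    return (x + 0.1 * torch.randn_like(x)).clamp(-1, 1)

G, D1, D2 = ToyG(), ToyD(), ToyD()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=2e-4)

def train_step(real):
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    fake = G(torch.randn(n, 64))

    # Discriminator step: each D classifies only its own augmented view,
    # so neither discriminator observes the raw samples directly.
    d_loss = (F.binary_cross_entropy_with_logits(D1(aug1(real)), ones)
              + F.binary_cross_entropy_with_logits(D1(aug1(fake.detach())), zeros)
              + F.binary_cross_entropy_with_logits(D2(aug2(real)), ones)
              + F.binary_cross_entropy_with_logits(D2(aug2(fake.detach())), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: G is scored by both discriminators through their
    # respective augmentations.
    g_loss = (F.binary_cross_entropy_with_logits(D1(aug1(fake)), ones)
              + F.binary_cross_entropy_with_logits(D2(aug2(fake)), ones))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    print(train_step(torch.rand(8, 3, 32, 32) * 2 - 1))
```

In this sketch the only difference from a standard single-discriminator GAN step is that the real and fake batches are routed through two distinct augmentation functions before reaching D1 and D2, which is one straightforward way to realize the information gap the abstract describes.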