Generative adversarial networks (GANs) can produce realistic images, but they may suffer from mode collapse in their output distribution. Here, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators with a single generator to improve generative performance. First, a comprehensive perspective is adopted to understand why mode collapse occurs. Second, an array of cooperative realness discriminators is introduced into the GAN framework to combat mode collapse and to explore discriminator roles ranging from a formidable adversary to a forgiving teacher. Third, two simple yet effective types of regularization are proposed for generating realistic and diverse images. Experiments on various datasets demonstrate that the proposed framework alleviates mode collapse and improves the quality of generated samples compared with previous methods.
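To make the one-generator, multi-discriminator setup concrete, the following is a minimal sketch, assuming a PyTorch implementation with toy fully connected networks. Averaging the losses from the K discriminators is an illustrative assumption rather than the paper's exact objective, and the proposed regularizers are omitted here.

```python
# Minimal sketch of one training step for a single generator against an array
# of K realness discriminators. Network sizes and the averaged aggregation of
# discriminator feedback are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn

K, NOISE_DIM, DATA_DIM = 3, 64, 784  # hypothetical sizes for illustration

G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, DATA_DIM))
Ds = nn.ModuleList([
    nn.Sequential(nn.Linear(DATA_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
    for _ in range(K)
])

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Ds.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Update every discriminator on real vs. generated samples.
    fake = G(torch.randn(batch, NOISE_DIM)).detach()
    loss_d = sum(bce(D(real_batch), ones) + bce(D(fake), zeros) for D in Ds) / K
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Update the generator against all discriminators jointly (averaged feedback: assumption).
    fake = G(torch.randn(batch, NOISE_DIM))
    loss_g = sum(bce(D(fake), ones) for D in Ds) / K
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example usage with random tensors standing in for real images.
print(train_step(torch.randn(32, DATA_DIM)))
```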