Generative adversarial networks (GANs), and StyleGANs in particular, have shown remarkable success in image synthesis. With carefully tailored architectural designs, StyleGANs can synthesize high-resolution, high-fidelity images. Previous work on improving StyleGANs has mainly focused on modifying their architecture or transferring knowledge from other domains. However, the knowledge contained in StyleGANs trained on the same domain remains unexplored. We aim to further boost StyleGAN performance from the perspective of knowledge distillation, i.e., improving an uncompressed student StyleGAN with the aid of a teacher StyleGAN trained on the same domain. Motivated by the implicit data distribution captured by the pretrained teacher discriminator, we propose to use the teacher discriminator as an additional source of supervision for the student generator, thereby leveraging the knowledge it contains. With the proposed distillation scheme, our method outperforms the original StyleGANs on several large-scale datasets and achieves state-of-the-art results on AFHQv2.
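
Below is a minimal PyTorch-style sketch of the idea summarized above, not the paper's actual implementation: a frozen, pretrained teacher discriminator contributes an extra adversarial term to the student generator's loss, alongside the usual loss from the student's own discriminator. The names `g_student`, `d_student`, `d_teacher`, and the weight `lambda_kd` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def student_generator_loss(g_student, d_student, d_teacher, z, lambda_kd=1.0):
    # Freeze the teacher discriminator's weights; gradients still flow
    # through it back into the student generator.
    for p in d_teacher.parameters():
        p.requires_grad_(False)

    fake = g_student(z)              # student-generated images
    logits_s = d_student(fake)       # scores from the (trainable) student discriminator
    logits_t = d_teacher(fake)       # scores from the frozen teacher discriminator

    # Non-saturating GAN loss (as in StyleGAN) from both discriminators;
    # the teacher term acts as the additional distillation supervision.
    loss_adv = F.softplus(-logits_s).mean()
    loss_kd = F.softplus(-logits_t).mean()
    return loss_adv + lambda_kd * loss_kd
```

In this sketch the student discriminator is still trained with the standard adversarial objective; only the generator receives the extra signal from the teacher discriminator.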