“…To make GANs easy to deploy on devices with limited computational resources, extensive work has been proposed to obtain lightweight GANs. The mainstream approach is to adapt model compression techniques developed for the image-classification task to compress GANs, such as weight pruning [14], weight quantization [15], channel pruning [16], [17], [18], [19], lightweight GAN architecture search/design [20], [21], [22], [23], evolutionary compression [24], [25], and knowledge distillation (KD) [26], [27], [28], [29], [30], [31], [32], [33]. However, most of the above works focus on compressing conditional (cycle) GANs for image-to-image generation tasks; few works have addressed compressing vanilla GANs, apart from the recent works [8], [17], [18], [27], [34].…”