“…Among various types of generative models, such as variational auto-encoders (VAEs) [16,28,46,55], flow-based models [27,47], and diffusion models [10,17], GANs [11] have received wide attention due to their impressive performance on both unconditional synthesis [22,24-26] and conditional synthesis [6,36,49,64]. Early studies on interpreting GANs [5,8,51,62] suggest that a well-learned GAN generator encodes rich knowledge that can be promisingly applied to various downstream tasks, including attribute editing [2,3,20,33,62,63,67], image processing [12,18,40,48,68], super-resolution [7,35], image classification [61], semantic segmentation [1,32,54,60,65], and visual alignment [44]. Existing interpretation approaches usually focus on the relationship between the latent space and the image space [51,58,62,…”