The generative adversarial network (GAN) is one of the most promising methods in the field of unsupervised learning. Model developers, users, and other stakeholders have a strong interest in the GAN mechanism, in which the generative model and the discriminative model learn from each other in a game-theoretic manner, creating causal relationships among input features, internal network structure, the feature extraction process, and output results. Studying the interpretability of GANs makes it possible to verify the validity, reliability, and robustness of GAN applications and to diagnose their weaknesses in specific settings, which in turn supports the design of better network structures. Interpretability can also improve security and reduce the decision-making and prediction risks that GANs introduce. This article surveys research on the interpretability of GANs and analyzes ways to evaluate how effective GAN interpretability techniques are in practice. In addition, the impact of interpretable GANs in fields such as medicine and the military is discussed, and current limitations and future challenges are outlined.
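To make the adversarial mechanism mentioned above concrete, the sketch below shows the standard two-step GAN training loop in PyTorch, in which the discriminator learns to separate real from generated samples while the generator learns to fool it. This is a minimal illustration only: the toy one-dimensional data distribution, network sizes, and hyperparameters are assumptions for demonstration, not details from the article.

```python
# Minimal GAN training loop sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 1, 64

# Generator: maps a noise vector z to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit scoring how likely a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Toy "real" data: samples from N(2.0, 0.5); a stand-in for a real dataset.
    real = torch.randn(batch, data_dim) * 0.5 + 2.0
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # fake.detach() keeps this step from updating the generator.
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: push D(G(z)) toward 1, i.e. try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The interpretability questions the article raises apply directly to this loop: each causal link, from input noise through the learned internal representations of G and D to the final output, is a candidate target for explanation.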