“…More specifically, as the authors in [9] expressed, one of the most important future directions is to improve the theoretical aspects of GANs in order to solve problems such as mode collapse, non-convergence, and training difficulties. Although there have been many works on the theoretical aspects, most current training strategies are based on optimization theory, whose scope is restricted to local convergence due to non-convexity, and the utilization of game-theoretic techniques is still in its infancy.…”

[Table: GAN variants grouped by evaluation metric — (unlabeled metric): Stackelberg GAN [46], [47], [45], MGAN [65], DDL-GAN [69], GMAN [68], MD-GAN [70], [73], DRAGAN [49], Fictitious GAN [27], [63], D2GAN [72]; FID: Stackelberg GAN [46], DDL-GAN [69], Microbatch GAN [71], MD-GAN [70], [74], FedGAN [51], Diversity-promoting GAN [56], SeqGAN [55]; Classification Scores: [52], [15], [60], CS-GAN [4]; Others: [52], ORGAN [59], OptiGAN [62]]
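For context, the quoted claim that game-theoretic tools are underused can be grounded in the original GAN formulation, which is itself a two-player zero-sum game. The following standard minimax objective (taken from Goodfellow et al.'s original GAN paper, not from the quoted passage) illustrates why convergence is a question of game equilibria rather than of a single non-convex minimization:

\begin{equation}
  \min_{G}\,\max_{D}\; V(D, G)
    = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
    + \mathbb{E}_{z \sim p_{z}(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
\end{equation}

Gradient-based training alternates updates of D and G on this objective; because V is non-convex in the parameters of G and non-concave in those of D, optimization-theoretic analyses typically guarantee only local convergence, whereas game-theoretic solution concepts such as Nash or Stackelberg equilibria (the latter used by Stackelberg GAN [46]) target the global structure of the game.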