“…However, the generator might miss some modes of the distribution even after reaching equilibrium, since it can fool the discriminator by generating from only a few modes of the real distribution (Goodfellow, 2016; Che et al., 2017; Chen et al., 2016; Salimans et al., 2016), hence producing limited diversity in samples. To address this problem, the literature explores two main approaches: improving GAN learning to practically reach a better optimum (Metz et al., 2017; Salimans et al., 2016; Gulrajani et al., 2017; Berthelot et al., 2017), or explicitly forcing GANs to produce various modes by design (Chen et al., 2016; Ghosh et al., 2017; Durugkar et al., 2017; Che et al., 2017; Liu & Tuzel, 2016). We hereby follow the latter strategy and propose a new way of dealing with GAN mode collapse.…”