In this paper, we propose a novel end-to-end model for fusing medical images that characterize structural information, i.e., I_S, with images that characterize functional information, i.e., I_F, of different resolutions, using a multi-generator multi-discriminator conditional generative adversarial network (MGMDcGAN). In the first cGAN, the generator, guided by a specifically designed content loss, aims to produce a realistic fused image that fools two discriminators, while the discriminators aim to detect the structural differences between the fused image and the respective source images. On this basis, we employ a second cGAN with a mask to enhance the dense-structure information in the final fused image while preventing the functional information from being weakened. Consequently, the final fused image is forced to concurrently preserve the structural information in I_S and the functional information in I_F. In addition, as a unified method, MGMDcGAN can be applied to different kinds of medical image fusion, i.e., MRI-PET, MRI-SPECT, and CT-SPECT, where MRI and CT are high-resolution instances of I_S, while PET and SPECT are typical low-resolution instances of I_F. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our MGMDcGAN over state-of-the-art methods.

INDEX TERMS Medical image fusion, generative adversarial network, different resolutions, end-to-end, unified method.
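As a rough illustration of the dual-discriminator objective described above (not the authors' released code), the first cGAN's generator loss can be sketched as one adversarial term per discriminator plus a content loss tying the fused image to both sources. The specific content-loss form, the upsampling of I_F, and the weights lambda_adv and lambda_content below are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch and pre-built discriminator networks.
# The loss form and weights are assumptions, not the paper's exact objective.
import torch
import torch.nn.functional as F

def generator_loss(fused, I_S, I_F_up, D_struct, D_func,
                   lambda_adv=1.0, lambda_content=100.0):
    """Generator objective against two discriminators plus a content loss.

    fused:   fused image produced by the generator, shape (N, 1, H, W)
    I_S:     structural source (e.g., MRI/CT), shape (N, 1, H, W)
    I_F_up:  functional source (e.g., PET/SPECT) upsampled to (N, 1, H, W)
    D_struct, D_func: discriminators returning per-sample logits
    """
    # Adversarial terms: each discriminator should score the fused image as real.
    logits_s = D_struct(fused)
    logits_f = D_func(fused)
    adv = (F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s))
           + F.binary_cross_entropy_with_logits(logits_f, torch.ones_like(logits_f)))

    # Assumed content loss: match horizontal gradients of I_S (structure)
    # and raw intensities of the upsampled I_F (function).
    def hgrad(x):
        return torch.abs(x[..., :, 1:] - x[..., :, :-1])

    content = F.l1_loss(hgrad(fused), hgrad(I_S)) + F.l1_loss(fused, I_F_up)

    return lambda_adv * adv + lambda_content * content
```

In this sketch, the two adversarial terms play the role of the two discriminators distinguishing the fused image from each source, while the content loss encourages the fused image to retain structure from I_S and intensity from I_F.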