Deep generative models such as generative adversarial networks (GANs) are powerful unsupervised tools for learning the distributions of data sets. Building a simple GAN architecture in PyTorch and training it on the CANDELS data set, we generate galaxy images at Hubble Space Telescope (HST) resolution starting from a noise vector. We then modify the GAN architecture to improve ground-based Subaru Hyper Suprime-Cam (HSC) images by increasing their resolution to that of HST. We apply this super-resolution GAN to a large sample of blended galaxies, which we create from CANDELS cutouts. In our simulated blend sample, ∼20% of the objects would remain unrecognizably blended even in HST-resolution cutouts. In the HSC-like cutouts this fraction rises to ∼90%; with our modified GAN we can lower it to ∼50%. We quantify the blending fraction at the high, low, and GAN resolutions over the whole manifold of angular separations, flux ratios, sizes, and redshift differences between the two blended objects. The two peaks recovered by the GAN deblender yield a factor-of-10 improvement in the photometry of the blended objects. Modifying the GAN architecture further, we also train a multiwavelength GAN on HST cutouts in seven optical + near-infrared bands. This multiwavelength GAN improves the fraction of detected blends by another ∼10% compared to the single-band GAN. These results are most beneficial to current and future precision cosmology experiments (e.g., LSST, SPHEREx, Euclid, Roman), particularly those relying on weak gravitational lensing, where blending is a major source of systematic error.
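The abstract does not specify the generator architecture, only that a simple GAN was built in PyTorch to map a noise vector to a galaxy cutout. The sketch below shows one minimal way such a generator could look, assuming a DCGAN-style stack of transposed convolutions; the latent dimension, layer widths, and 64×64 single-band output are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a DCGAN-style generator: noise vector -> galaxy cutout.
# Latent size, layer widths, and 64x64 single-band output are assumptions
# for illustration; the paper's exact architecture is not given here.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, channels=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector (latent_dim x 1 x 1) -> (base*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8),
            nn.ReLU(inplace=True),
            # -> (base*4) x 8 x 8
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4),
            nn.ReLU(inplace=True),
            # -> (base*2) x 16 x 16
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2),
            nn.ReLU(inplace=True),
            # -> base x 32 x 32
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            # -> channels x 64 x 64, pixel values squashed to [-1, 1]
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z has shape (batch, latent_dim); reshape to a 1x1 spatial map
        return self.net(z.view(z.size(0), -1, 1, 1))

# Draw a batch of noise vectors and generate synthetic galaxy cutouts
g = Generator()
z = torch.randn(16, 100)
fake = g(z)  # shape: (16, 1, 64, 64)
```

A multiwavelength variant of the kind described in the abstract would set `channels=7` (one per optical/near-infrared band), while the super-resolution GAN would replace the noise input with an HSC-like image; both changes are modifications of this same generator skeleton.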