In integrated circuit manufacturing, defects in epoxy drops for die attachment must be identified during production. Modern identification techniques based on vision-based deep neural networks require a very large number of defect and non-defect epoxy drop images. In practice, however, very few defective epoxy drop images are available. This paper presents a generative adversarial network solution that generates synthesized defective epoxy drop images as a data augmentation approach, so that vision-based deep neural networks can be trained or tested on such images. More specifically, the CycleGAN variation of the generative adversarial network is used, with its cycle-consistency loss function enhanced by two additional loss terms: learned perceptual image patch similarity (LPIPS) and the structural similarity index measure (SSIM). The results obtained indicate that, with the enhanced loss function, the quality of the synthesized defective epoxy drop images improves by 59%, 12%, and 131% in terms of peak signal-to-noise ratio (PSNR), universal image quality index (UQI), and visual information fidelity (VIF), respectively, compared to the standard CycleGAN loss function. A typical image classifier is used to show the improvement in the identification outcome when the synthesized images generated by the developed data augmentation approach are used.
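To illustrate the idea of augmenting the cycle-consistency loss with SSIM and LPIPS terms, the following is a minimal sketch in NumPy. It assumes an L1 cycle-consistency term, a simplified single-window SSIM (a full implementation would typically use a sliding Gaussian window), and treats LPIPS as a user-supplied callable, since LPIPS requires a pretrained perceptual network. The weights `w_ssim` and `w_lpips` are hypothetical; the paper's actual loss weights are not stated here.

```python
import numpy as np

def cycle_l1(x, y):
    # Standard CycleGAN cycle-consistency term: mean absolute error
    # between an input image and its reconstruction.
    return float(np.mean(np.abs(x - y)))

def global_ssim(x, y, C1=0.01**2, C2=0.03**2):
    # Simplified single-window SSIM over the whole image
    # (illustrative only; practical SSIM is computed over local windows).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + C1) * (2 * cov + C2)) /
                 ((mx**2 + my**2 + C1) * (vx + vy + C2)))

def enhanced_cycle_loss(x, x_rec, lpips_fn, w_ssim=1.0, w_lpips=1.0):
    # Combined loss: cycle consistency plus SSIM and LPIPS penalties.
    # SSIM is a similarity (1.0 for identical images), so (1 - SSIM)
    # is used as the penalty term. lpips_fn stands in for a pretrained
    # perceptual-distance network (hypothetical placeholder).
    return (cycle_l1(x, x_rec)
            + w_ssim * (1.0 - global_ssim(x, x_rec))
            + w_lpips * lpips_fn(x, x_rec))
```

For a perfect reconstruction (and zero perceptual distance), every term vanishes, so the combined loss is zero; any distortion increases it through all three terms simultaneously.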