This study investigates the effectiveness of conditional generative adversarial networks (CGANs) for synthesizing optical coherence tomography (OCT) images for medical diagnosis. Specifically, a CGAN is trained to generate images representing four retinal conditions: normal retina, drusen (DRUSEN), choroidal neovascularization (CNV), and diabetic macular edema (DME), producing a dataset of 102,400 synthetic images per condition. The quality of these images is evaluated in two ways. First, 18 transfer-learning neural networks (including AlexNet, VGG-16, and GoogLeNet) assess image quality through model-scoring metrics, yielding accuracies of 97.4% to 99.9% and F1 scores of 95.3% to 100% across conditions. Second, interpretability techniques (Grad-CAM, occlusion sensitivity, and LIME) compare the decision-score distributions of real and synthetic images, further validating the CGAN's performance. The results indicate that CGAN-generated OCT images closely resemble real images and could contribute substantially to medical imaging datasets.
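To make the class-conditioning idea concrete, the minimal PyTorch sketch below shows one common way a CGAN generator can be conditioned on a disease label: the label is embedded and concatenated with the noise vector before upsampling. The architecture, latent size, 64x64 resolution, and all layer widths are illustrative assumptions for exposition only and do not reflect the authors' exact model or training setup.

```python
# Hypothetical sketch of a class-conditional GAN generator for grayscale,
# OCT-like images. Hyperparameters are assumptions, not the paper's values.
import torch
import torch.nn as nn

NUM_CLASSES = 4      # NORMAL, DRUSEN, CNV, DME
LATENT_DIM = 100     # assumed noise dimension
EMBED_DIM = 16       # assumed label-embedding dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(NUM_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(
            # project (noise + label embedding) to a 4x4 feature map
            nn.ConvTranspose2d(LATENT_DIM + EMBED_DIM, 256, 4, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),    # 32x32
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),     # 64x64 grayscale
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # concatenate noise with the label embedding, then treat it as a 1x1 map
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))

if __name__ == "__main__":
    gen = Generator()
    z = torch.randn(8, LATENT_DIM)
    labels = torch.randint(0, NUM_CLASSES, (8,))   # one of the four conditions
    fake_images = gen(z, labels)
    print(fake_images.shape)  # torch.Size([8, 1, 64, 64])
```

Sampling with a fixed label then yields synthetic images of a chosen condition, which is how a class-balanced synthetic dataset (e.g., equal counts per condition) can be assembled for downstream evaluation by pretrained classifiers.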