Human vision depends heavily on the retina, and vision loss can result from retinal diseases whose treatment is slow or ineffective. Moreover, manual diagnosis becomes error-prone when large datasets are involved. We therefore propose a fully automated transfer-learning approach for diagnosing diabetic retinopathy (DR) that minimizes human intervention while maintaining high classification accuracy. Specifically, we propose a transfer learning-based trilateral attention network (TaNet) for classification. To improve the visual quality of the DR images, contrast-limited adaptive histogram equalization is applied. The preprocessed images are then segmented using a bilateral segmentation network (BiSeNet), which segments the optic disc and the blood vessels separately. After segmentation, features are extracted using the wavelet scattering transform. The proposed model is built by fine-tuning a pre-trained network through transfer learning. The framework was evaluated on the Messidor-2, EyePACS, and APTOS 2019 datasets using standard performance metrics, and the classification results exceeded 98% in sensitivity, specificity, precision, and accuracy. The proposed approach thus yields greater performance than existing methods.
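As a rough illustration of the contrast-limited histogram equalization preprocessing step, the sketch below implements a simplified, global (non-adaptive) variant in NumPy. The function name, clip fraction, and bin count are illustrative assumptions, not the paper's implementation; a full CLAHE pipeline would additionally equalize per tile and interpolate between tiles.

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01, nbins=256):
    """Simplified, global contrast-limited histogram equalization.

    img: 2D uint8 grayscale array. clip_limit is the fraction of the
    total pixel count at which each histogram bin is clipped.
    """
    hist, _ = np.histogram(img, bins=nbins, range=(0, 255))
    # Contrast limiting: clip each bin and redistribute the excess
    # counts uniformly across all bins.
    limit = max(1, int(clip_limit * img.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // nbins
    # Map intensities through the normalized cumulative histogram.
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]
```

In a fundus-image pipeline, this transform would typically be applied to the green channel (where vessel contrast is highest) before segmentation.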