Diabetes complications can lead to an eye disease called Diabetic Retinopathy (DR), which permanently damages the blood vessels of the retina. If not treated early, DR is a leading cause of blindness. The medical tests required to detect DR and determine its stage are labor-intensive, expensive, and time-consuming. To address this issue, this paper presents an autonomous diagnostic system based on hybrid deep and machine learning techniques. The proposed approach first segments lesions in fundus images using the LuNet network. A Refined Attention Pyramid Network (RAPNet) is then used to extract global and local features. To improve classifier performance, the most discriminative features are selected from the extracted feature set using the Aquila Optimizer (AO) algorithm. Finally, a LightGBM model classifies the input image according to DR severity. The performance of the proposed framework is evaluated on three publicly available datasets (MESSIDOR, APTOS, and IDRiD) using accuracy, precision, recall, and F1-score. The proposed classifier achieves accuracies of 99.29%, 99.35%, and 99.31% on these datasets, respectively. The experimental results demonstrate that the proposed technique is effective for disease identification and reliable DR grading.
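
To make the final grading stage concrete, the following is a minimal sketch of how a LightGBM classifier could be fitted on a feature matrix of the kind produced by the RAPNet/AO stages. The feature matrix `X` and the severity labels `y` are hypothetical placeholders (random data standing in for AO-selected RAPNet features and ground-truth DR grades); only the LightGBM and scikit-learn calls reflect real library APIs, and the hyperparameters shown are illustrative, not the paper's settings.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical placeholders: 500 images, 128 AO-selected features, 5 DR grades (0-4).
X = np.random.rand(500, 128)           # stands in for AO-selected RAPNet features
y = np.random.randint(0, 5, size=500)  # stands in for ground-truth severity grades

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# LightGBM gradient-boosted trees; the sklearn wrapper handles multiclass automatically.
clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))  # per-grade precision, recall, F1-score
```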