Objectives: To propose a novel method to enhance glaucoma identification by leveraging Digital Fundus Images (DFI). A deep learning-based approach, combined with feature detection techniques, is used to discover the intrinsic features of the DFI in an unsupervised manner and enable robust detection with high accuracy. Methods: The Deep Learning-based Enhanced Auto-Encoder Network (DL-EAEN) approach is used to learn latent representations from DFI and identify the morphological changes associated with glaucoma for prompt identification and classification. The fundus images are used for optic disc localization and glaucoma detection, and the Scale-Invariant Feature Transform (SIFT) approach is used to identify local features and keypoints in the images. The PAPILA retinal dataset, comprising records of 244 male and female patients with 488 fundus images of the left and right eyes, is used for this study; the clinical labels include healthy, glaucoma, suspect, and eyes with a crystalline lens or an intraocular lens (IOL). U-Net and Mask R-CNN are employed for image segmentation to construct pixel-level masks and delineate the boundary of the optic cup. The performance of DL-EAEN is evaluated in MATLAB and compared against existing models, namely SVM, AdaBoost, and CNN-Softmax classifiers. Findings: The proposed DL-EAEN method outperforms the prevailing SVM, AdaBoost, and CNN-Softmax classifiers, achieving 95.6% accuracy, a Dice score of 0.8, 96.2% sensitivity, 97.01% specificity, 97.08% F-score, 97.41% precision, 98.02% recall, and an AUC-ROC with a TPR of 0.89 at an FPR of 0.16. Novelty: The results show that DL-EAEN detects and classifies glaucoma accurately and consistently, which helps ophthalmologists reach a diagnosis more easily. In terms of accuracy, Dice score, AUC-ROC, sensitivity, specificity, precision, and F-score, DL-EAEN overcomes the limitations of the existing SVM, AdaBoost, and CNN-Softmax classifiers.
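For illustration only, the following Python sketch shows the two ingredients outlined in the Methods: SIFT keypoint extraction on a fundus image and a small convolutional autoencoder that learns a latent representation which a downstream glaucoma classifier could use. The study itself was implemented in MATLAB; here OpenCV and PyTorch are assumed, and the file name fundus_example.png, the 128x128 input size, and the layer widths are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch (not the authors' MATLAB implementation): SIFT keypoints on a
# fundus image plus a small convolutional autoencoder whose latent code could
# feed a downstream glaucoma classifier. File name, image size, and layer
# widths are assumptions for illustration.
import cv2
import torch
import torch.nn as nn

# --- SIFT: local features / keypoints on a grayscale fundus image -----------
img = cv2.imread("fundus_example.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
if img is not None:
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(f"{len(keypoints)} SIFT keypoints detected")

# --- Convolutional autoencoder: unsupervised latent representation ----------
class FundusAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Encoder: 1x128x128 image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32x32
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, latent_dim),
        )
        # Decoder: latent vector -> reconstructed 1x128x128 image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = FundusAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One illustrative training step on a random batch standing in for DFI patches.
batch = torch.rand(8, 1, 128, 128)
optimizer.zero_grad()
recon, latent = model(batch)
loss = criterion(recon, batch)  # reconstruction loss drives unsupervised learning
loss.backward()
optimizer.step()
print("reconstruction loss:", loss.item(), "latent shape:", tuple(latent.shape))
```

In such a setup the latent vector, rather than the raw pixels, would be passed to a classifier (e.g., a softmax or SVM head) for the healthy/glaucoma/suspect decision, which mirrors the role the abstract assigns to the learned representations in DL-EAEN.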