Early detection and proper screening are essential to prevent vision loss due to glaucoma. In recent years, convolutional neural networks (CNNs) have been successfully applied to color fundus images for the automatic detection of glaucoma. Compared to existing automatic screening methods, CNNs can extract discriminative features directly from the fundus images. In this paper, a CNN-based deep learning architecture is designed for the classification of glaucomatous and normal fundus images. An 18-layer CNN is designed and trained to extract discriminative features from the fundus image; it comprises four convolutional layers, two max-pooling layers, and one fully connected layer. A two-stage tuning approach is proposed for selecting a suitable batch size and initial learning rate. The proposed network is tested on the DRISHTI-GS1, ORIGA, RIM-ONE2 (release 2), ACRIMA, and large-scale attention-based glaucoma (LAG) databases. A rotation-based data augmentation technique is employed to enlarge the datasets. A randomly selected 70% of the images is used for training the model and the remaining 30% for testing. Overall accuracies of 86.62%, 85.97%, 78.32%, 94.43%, and 96.64% are obtained on the DRISHTI-GS1, RIM-ONE2, ORIGA, LAG, and ACRIMA databases, respectively. The proposed method achieves an accuracy, sensitivity, specificity, and precision of 96.64%, 96.07%, 97.39%, and 97.74%, respectively, on the ACRIMA database. Compared to other existing architectures, the proposed method is robust to Gaussian noise and salt-and-pepper noise.
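The two-stage tuning idea above can be sketched as a coarse, sequential search: fix one hyperparameter, select the other, then refine. The candidate grids and the `evaluate()` proxy below are illustrative assumptions, since the abstract does not specify them; in practice `evaluate()` would train the 18-layer CNN and return validation accuracy.

```python
# Sketch of a two-stage search over batch size and initial learning rate.
# evaluate() is a hypothetical stand-in for training the CNN and returning
# validation accuracy; here it merely scores closeness to an assumed
# optimum of (batch_size=32, lr=1e-3) so the sketch is self-contained.

def evaluate(batch_size, lr):
    return 1.0 - abs(batch_size - 32) / 64 - abs(lr - 1e-3)

def two_stage_tuning():
    # Stage 1: fix a nominal learning rate and select the batch size.
    batch_sizes = [8, 16, 32, 64]
    best_bs = max(batch_sizes, key=lambda b: evaluate(b, 1e-3))
    # Stage 2: with the chosen batch size, select the initial learning rate.
    learning_rates = [1e-2, 1e-3, 1e-4]
    best_lr = max(learning_rates, key=lambda lr: evaluate(best_bs, lr))
    return best_bs, best_lr

print(two_stage_tuning())
```

Searching the two hyperparameters sequentially costs 4 + 3 trainings instead of the 12 a full grid would require, which is the practical appeal of a staged approach.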
Glaucomatous optic neuropathy is the leading cause of incurable vision impairment and blindness across the world. Manual interpretation of the pathological structures in fundus images is time-consuming and requires the expertise of a competent specialist. With the development of deep learning approaches, automated glaucoma diagnosis has become feasible and effective for large-scale screening. Convolutional neural networks, in particular, have emerged as a promising choice for glaucoma detection from fundus images owing to their remarkable success in image classification. Transferring the optimized weights from a pre-trained model expedites and simplifies the training of a deep neural network. In this paper, a deep ensemble model based on the stacking ensemble learning technique is developed to attain optimum performance in the classification of glaucomatous and normal images. Thirteen pre-trained models, namely Alexnet, Googlenet, VGG-16, VGG-19, Squeezenet, Resnet-18, Resnet-50, Resnet-101, Efficientnet-b0, Mobilenet-v2, Densenet-201, Inception-v3, and Xception, are implemented. Their performance is compared across 65 configurations, comprising the 13 CNN architectures and five different classification approaches. A two-stage ensemble selection technique is proposed to select the optimal configurations, which are then pooled using a probability-averaging technique. The final classification is performed by an SVM classifier. In this work, publicly available databases are modified (DRISHTI-GS1-R, ORIGA-R, RIM-ONE2-R, LAG-R, and ACRIMA-R) using a data-level oversampling technique to validate the performance of the deep ensemble model. Ensembling the best configurations yields overall classification accuracies of 93.4%, 79.6%, 91.3%, 99.5%, and 99.6% on the DRISHTI-GS1-R, ORIGA-R, RIM-ONE2-R, ACRIMA-R, and LAG-R databases, respectively.
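The probability-averaging step of such a stacking ensemble can be sketched as follows. The probability values and the three-configuration, three-image setup are illustrative assumptions; the paper passes the pooled probabilities to an SVM meta-classifier, and the fixed 0.5 threshold here stands in for that step only to keep the sketch self-contained.

```python
# Sketch of probability averaging across selected configurations.
# Each row of `probs` holds one configuration's predicted probability
# of the "glaucoma" class for three hypothetical fundus images.

def average_probabilities(probs):
    n_models = len(probs)
    n_images = len(probs[0])
    # Pool per image: mean of the class probabilities over all models.
    return [sum(p[i] for p in probs) / n_models for i in range(n_images)]

probs = [
    [0.90, 0.20, 0.60],  # configuration 1
    [0.80, 0.30, 0.40],  # configuration 2
    [0.70, 0.10, 0.50],  # configuration 3
]
pooled = average_probabilities(probs)                      # ~[0.8, 0.2, 0.5]
labels = ["glaucoma" if p >= 0.5 else "normal" for p in pooled]
print(pooled, labels)
```

Averaging probabilities rather than hard votes preserves each model's confidence, which gives the downstream meta-classifier more information to separate borderline cases.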
Accurate diagnosis of the Plasmodium parasite from blood cell images is essential to prevent the further spread of malaria, one of the deadliest infectious diseases, transmitted mainly by the bite of the female Anopheles mosquito. Conventionally, microscopists diagnose the disease by examining thick and thin blood smears; inter- and intra-observer errors, however, can degrade classification accuracy. To overcome this, a robust and shallow convolutional neural network is developed for the automatic detection of the malaria parasite from thin blood smear images. The network is trained on 80% of the images (11,023 parasitized and 11,023 uninfected) and tested on the remaining 20% (2756 parasitized and 2756 uninfected). Several standard pre-trained models, namely Alexnet, VGG-16, VGG-19, Resnet-18, Resnet-50, Resnet-101, Squeezenet, Mobilenet-v2, Inception-v3, Googlenet, Xception, and Densenet-201, are implemented, and their results are compared with the proposed method. Classification accuracy, sensitivity, specificity, positive predictive value, and F1-score are the metrics used to evaluate the performance of the networks. Compared to the existing pre-trained models, the proposed CNN achieves better results, with a classification accuracy of 97.8%, sensitivity of 97.9%, specificity of 97.8%, positive predictive value of 97.8%, and F1-score of 97.84%. The proposed method also trains much faster than the pre-trained networks owing to its smaller number of parameters.
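The five metrics named above all derive from a binary confusion matrix with parasitized as the positive class, and can be sketched as follows. The error counts in the example are hypothetical, not the paper's results; only the per-class test sizes (2756 images each) come from the abstract.

```python
# Sketch of the evaluation metrics computed from a binary confusion
# matrix (parasitized = positive class, uninfected = negative class).

def metrics(tp, fn, fp, tn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on parasitized cells
    specificity = tn / (tn + fp)   # recall on uninfected cells
    ppv         = tp / (tp + fp)   # positive predictive value
    f1          = 2 * ppv * sensitivity / (ppv + sensitivity)
    return accuracy, sensitivity, specificity, ppv, f1

# Hypothetical error counts over the 2756 + 2756 test images.
acc, sen, spe, ppv, f1 = metrics(tp=2700, fn=56, fp=60, tn=2696)
print(f"accuracy={acc:.3f} sensitivity={sen:.3f} "
      f"specificity={spe:.3f} ppv={ppv:.3f} f1={f1:.3f}")
```

Reporting sensitivity and specificity alongside accuracy matters here because a screening tool must bound both missed infections (false negatives) and unnecessary treatment (false positives), which accuracy alone conceals.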