The classification of retinal vasculature into arterioles and venules (AV) is considered the first step in the development of an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to the application of deep learning techniques for AV classification. This paper presents an encoder-decoder-based fully convolutional neural network for classifying retinal vasculature into arterioles and venules without requiring the preliminary step of vessel segmentation. An optimized multi-loss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images and offers researchers a benchmark against which to compare their AV classification results.
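A minimal sketch of the general idea, not the authors' exact architecture: a small encoder-decoder network labels each pixel as background, arteriole, or venule, and is trained with a weighted sum of a pixel-wise cross-entropy loss and a soft-Dice term used here as a stand-in for the segment-wise loss. Layer sizes and the loss weighting are illustrative assumptions.

```python
# Illustrative encoder-decoder for pixel-wise AV labelling (assumed layout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAVNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        e1 = self.enc1(x)                      # full-resolution features
        e2 = self.enc2(F.max_pool2d(e1, 2))    # encoder: downsample
        d = F.interpolate(e2, scale_factor=2)  # decoder: upsample back
        return self.dec(d)                     # per-pixel class logits

def multi_loss(logits, target, alpha=0.5):
    """Weighted sum of pixel-wise cross-entropy and a soft-Dice term
    (a proxy for the paper's segment-wise loss)."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(2, 3))
    dice = 1 - (2 * inter / (probs.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3)) + 1e-6)).mean()
    return alpha * ce + (1 - alpha) * dice

# One forward/backward pass on a dummy fundus patch
x = torch.randn(1, 3, 64, 64)
y = torch.randint(0, 3, (1, 64, 64))
loss = multi_loss(TinyAVNet()(x), y)
loss.backward()
```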
Segmentation of the retinal blood vessels using filtering techniques is a widely used step in the development of automated systems for diagnostic retinal image analysis. This paper optimizes blood vessel segmentation by extending the trainable B-COSFIRE filter through the identification of more suitable parameters. The filter parameters are selected through an optimization procedure applied to three public datasets (STARE, DRIVE, and CHASE-DB1). The suggested approach analyzes the selection of thresholding parameters, followed by the application of background-artifact removal techniques. The results are better than those of other state-of-the-art vessel segmentation methods. ANOVA is also used to identify the parameters with the most significant impact on performance (p-value < 0.05). The proposed enhancement improves the vessel segmentation accuracy on DRIVE, STARE, and CHASE-DB1 to 95.47%, 95.30%, and 95.30%, respectively.
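A hedged sketch of the parameter-selection step only: sweep the binarization threshold applied to a B-COSFIRE-style filter response, remove small background artifacts, and keep the threshold with the best pixel accuracy against a manual annotation. The `filter_response` array stands in for the actual B-COSFIRE output, and the artifact size limit is an assumed value.

```python
# Threshold sweep plus small-artifact removal (illustrative, not the paper's code).
import numpy as np
from skimage.morphology import remove_small_objects

def segment(filter_response, threshold, min_size=30):
    """Binarize the filter response and drop connected components below min_size px."""
    binary = filter_response >= threshold
    return remove_small_objects(binary, min_size=min_size)

def pick_threshold(filter_response, ground_truth, thresholds):
    """Return the threshold giving the highest pixel accuracy on one labelled image."""
    best_t, best_acc = None, -1.0
    for t in thresholds:
        acc = np.mean(segment(filter_response, t) == ground_truth)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Dummy data in place of a DRIVE/STARE/CHASE-DB1 response and its manual annotation
resp = np.random.rand(64, 64)
gt = resp > 0.6
print(pick_threshold(resp, gt, np.linspace(0.1, 0.9, 17)))
```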
Several approaches have been proposed to detect malicious manipulation by electricity fraudsters. Among the most significant are machine learning and data-driven methods, which have shown advantages over traditional methods and have become predominant in recent years. In this study, a novel method is introduced to detect fraudulent non-technical losses (NTL) in smart grids through a two-stage detection process. In the first stage, the time-series readings are enriched with a new set of features extracted from the detection of sudden jump patterns in electricity consumption and from an autoregressive integrated moving average (ARIMA) model. In the second stage, a distributed random forest (DRF) generates the learned model. The proposed model is applied to the public SGCC dataset and achieves 98% accuracy and F1-score, outperforming other recently reported state-of-the-art NTL detection methods applied to the same SGCC dataset.
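A minimal sketch of the two-stage idea under stated assumptions: (1) each customer's consumption series is enriched with sudden-jump and ARIMA-residual summary features, and (2) the enriched features feed a random forest classifier (scikit-learn here, standing in for the distributed random forest used in the paper). The specific features, ARIMA order, and jump rule are illustrative choices, not the paper's exact feature set.

```python
# Two-stage NTL detection sketch: feature enrichment, then a forest classifier.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestClassifier

def enrich(series, jump_factor=3.0):
    """Summary features from one consumption time series (assumed feature set)."""
    diffs = np.diff(series)
    jumps = np.sum(np.abs(diffs) > jump_factor * (np.std(diffs) + 1e-9))  # sudden jumps
    resid = ARIMA(series, order=(1, 1, 1)).fit().resid                    # model misfit
    return [series.mean(), series.std(), jumps, np.abs(resid).mean()]

rng = np.random.default_rng(0)
X = np.array([enrich(rng.gamma(2.0, 5.0, 100)) for _ in range(40)])  # dummy customers
y = rng.integers(0, 2, 40)                    # dummy labels: 1 = fraudulent, 0 = benign
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print(clf.predict(X[:5]))
```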
Hypertensive retinopathy severity classification is proportionally related to tortuosity severity grading, yet no existing tortuosity severity scale enables a computer-aided system to grade the tortuosity severity of a retinal image. This work introduces a machine learning model that identifies the tortuosity severity of a retinal image automatically and thereby contributes to the development of an automated hypertensive retinopathy or diabetic retinopathy grading system. First, tortuosity is quantified using fourteen tortuosity measurement formulas for the retinal images of the AV classification dataset to create the tortuosity feature set. Second, manual labeling is performed and reviewed by two ophthalmologists to construct a tortuosity severity ground-truth grading for each image in the AV classification dataset. Finally, the feature set is used to train and validate the machine learning models (J48 decision tree, ensemble rotation forest, and distributed random forest). The best-performing model is used as the tortuosity severity classifier to identify the tortuosity severity (normal, mild, moderate, or severe) of any given retinal image. The distributed random forest model achieves the highest accuracy (99.4%) compared with the J48 decision tree and rotation forest models, with the lowest root mean square error (0.0000192) and the lowest mean absolute error (0.0000182). The proposed tortuosity severity grading matched the ophthalmologists' judgment. Moreover, optimizing the vessel segmentation, the vessel segment extraction, and the constructed feature set increased the accuracy of the automatic tortuosity severity detection model.
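A small sketch of the feature-construction step: two common tortuosity measures (arc-to-chord ratio and total squared turning angle) computed on a vessel centerline given as (x, y) points, followed by a plain decision tree as a stand-in for the J48 and forest models. These two formulas are illustrative examples only, not the paper's full set of fourteen measures.

```python
# Tortuosity features from a vessel centerline, then a toy severity classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tortuosity_features(points):
    """points: (N, 2) array of centerline coordinates along one vessel segment."""
    seg = np.diff(points, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    arc = seg_len.sum()                                   # arc length
    chord = np.linalg.norm(points[-1] - points[0])        # straight-line distance
    arc_chord = arc / (chord + 1e-9)                      # distance metric (>= 1)
    angles = np.arctan2(seg[:, 1], seg[:, 0])
    curvature = np.sum(np.diff(angles) ** 2)              # total squared turning
    return [arc_chord, curvature]

# Dummy "vessels": a straight segment vs. a wavy one, labelled 0 (normal) / 1 (tortuous)
t = np.linspace(0, 1, 50)
straight = np.c_[t, 0.1 * t]
wavy = np.c_[t, 0.05 * np.sin(12 * np.pi * t)]
X = np.array([tortuosity_features(straight), tortuosity_features(wavy)])
clf = DecisionTreeClassifier().fit(X, [0, 1])
print(clf.predict(X))
```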