Frontal view gait recognition for people identification has been carried out using single RGB, stereo RGB, Kinect 1.0, and Doppler radar sensors. However, existing methods based on these technologies suffer from several problems. We therefore propose a four-part method for frontal view gait recognition based on the fusion of multiple features acquired from a Time of Flight (ToF) camera. We have developed a gait data set captured by a ToF camera. The data set includes two sessions recorded seven months apart, with 46 and 33 subjects respectively, each performing six walks under five covariates. The four-part method comprises: a new human silhouette extraction algorithm that reduces the multiple-reflection problem experienced by ToF cameras; a frame selection method based on a new gait cycle detection algorithm; four new gait image representations; and a novel fusion classifier. Rigorous experiments compare the proposed method with state-of-the-art methods. The results show distinct improvements in recognition rates for all covariates. The proposed method outperforms all major existing approaches and achieves 66.1% Rank 1 and 81.0% Rank 5 recognition rates across all covariates, compared with 35.7% and 57.7% for the best state-of-the-art method.
Index Terms—Gait recognition, frontal view, Time of Flight camera, fusion of features, depth gait data set.
Digital image processing is one of the most widely used computer vision techniques, especially in biomedical engineering. Modern ophthalmology relies on it to identify, from fundus images, the biomarkers of several life-threatening conditions such as hypertensive retinopathy, transient ischemic attack, and stroke. Geometric features such as vessel tortuosity, branching angles, vessel diameter, and fractal dimension are considered biomarkers for these cardiovascular diseases. Widening of the retinal vessel diameter has been found to be an early symptom of transient ischemic attack or stroke. In this paper, a new computer-aided, automated method for measuring retinal vessel diameter using the Euclidean Distance Transform is developed. The proposed algorithm measures the Euclidean distance of the bright pixels within the Region of Interest (ROI). The Vascular Disease Image Set (VDIS) and Central Light Reflex Image Set (CLRIS) of the Retinal Vessel Image Set for Estimation of Width (REVIEW) database were used to evaluate the performance of the proposed diameter-measurement algorithm, which obtained 98.1% accuracy on CLRIS and 97.7% on VDIS. With further evaluation, validation, and enhancement, the method could be integrated into a clinical computer-aided diagnostic tool.
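The core idea of diameter measurement via the Euclidean Distance Transform can be sketched as follows. This is a minimal illustration on a synthetic binary vessel mask, not the authors' implementation; the ROI selection and handling of the REVIEW images are omitted, and the `vessel_diameter` helper is an assumption for exposition.

```python
# Minimal sketch: vessel diameter from the Euclidean Distance Transform.
# Assumption: the vessel is given as a binary mask (bright pixels = vessel).
import numpy as np
from scipy.ndimage import distance_transform_edt

def vessel_diameter(mask):
    """Estimate vessel diameter (in pixels) from a binary vessel mask.

    The EDT assigns each vessel pixel its distance to the nearest
    background pixel; along the vessel centreline this distance peaks
    at roughly half the width, so twice the maximum (minus one pixel
    for the centre itself) approximates the diameter.
    """
    dist = distance_transform_edt(mask)
    return 2 * dist.max() - 1

# Synthetic vertical "vessel", 5 pixels wide, in a 20x11 image.
mask = np.zeros((20, 11), dtype=bool)
mask[:, 3:8] = True
print(vessel_diameter(mask))  # 5.0
```

In practice the maximum would be taken per centreline point rather than globally, giving a diameter profile along the vessel.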
Plants have played a significant role in the history of humankind, chiefly as a source of nourishment for humans and animals. However, plants are typically vulnerable to various diseases such as leaf blight, gray spot, and rust, which cause great losses to farmers and ranchers. An appropriate method for estimating the severity of plant leaf diseases is therefore needed. This paper presents fusions of the Fuzzy C-Means segmentation method with four different colour spaces, namely RGB, HSV, L*a*b*, and YCbCr, to estimate plant leaf disease severity. The performance of the proposed algorithms is recorded and compared with previous methods, namely K-Means and Otsu's thresholding. The best combination of severity-estimation algorithm and colour space is Fuzzy C-Means with YCbCr. The average performance of Fuzzy C-Means alone is 91.08% and that of YCbCr alone is 83.74%, while their combination achieves 96.81% accuracy. This algorithm is more effective than the others not only in segmentation performance but also in computation time: 34.75 s on average with a standard deviation of 0.2697 s.
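The Fuzzy C-Means step at the heart of this approach can be sketched in pure NumPy. This is a generic textbook FCM, assuming two clusters over pixel feature vectors; the YCbCr conversion, the severity-percentage computation, and the `fuzzy_c_means` function name are illustrative assumptions, not the authors' code.

```python
# Sketch of Fuzzy C-Means clustering (fuzzifier m) on pixel features.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Cluster rows of X into c fuzzy clusters.

    Returns (centers, u) where u[k, i] is the membership degree of
    sample i in cluster k; memberships sum to 1 over clusters.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0)                     # normalise memberships per sample
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
        # distance from every sample to every centre, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-10)              # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)          # standard FCM membership update
    return centers, u

# Two well-separated intensity groups (e.g. healthy vs diseased pixels).
X = np.concatenate([np.full((50, 1), 0.1), np.full((50, 1), 0.9)])
centers, u = fuzzy_c_means(X)
print(np.sort(centers.ravel()))  # approximately [0.1, 0.9]
```

For severity estimation, the fraction of pixels whose highest membership falls in the "diseased" cluster would then serve as the severity percentage.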
Retinal blood vessel segmentation is crucial because it is the earliest step in measuring various indicators of retinopathy, such as arterio-venous nicking and focal or generalized arteriolar narrowing. The segmentation can be used clinically only if its accuracy is close to 100%. In this paper, we present a new automated method to extract blood vessels from retinal fundus images. The proposed method comprises two main parts, pre-processing and segmentation, each with a few subcomponents. The core of the segmentation part is morphological reconstruction followed by the morphological top-hat transform; Otsu's thresholding then classifies pixels as vessel or background. The image database used in this study is the High Resolution Fundus Image Database (HRFID). The developed segmentation method achieves accuracies of 95.17%, 92.06%, and 94.71% when tested on healthy, diabetic retinopathy (DR), and glaucoma patients respectively. Overall, the performance of the proposed method is comparable with existing methods, with accuracies above 90% for all three categories: healthy, DR, and glaucoma.
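The top-hat plus Otsu back-end of such a pipeline can be sketched as below. This is a hedged simplification: the reconstruction-based enhancement of the paper is approximated by the plain grey-scale closing inside a black top-hat (vessels are dark on a bright fundus background), and `segment_vessels`, `otsu_threshold`, and the structuring-element size are illustrative assumptions, not the authors' implementation.

```python
# Sketch: enhance thin dark vessels with a black top-hat, then binarise
# with Otsu's threshold.
import numpy as np
from scipy.ndimage import black_tophat

def otsu_threshold(img, bins=256):
    """Return the threshold maximising between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = hist.cumsum()                    # cumulative class-0 pixel counts
    total = w0[-1]
    mu = (hist * centers).cumsum()        # cumulative intensity mass
    mu_t = mu[-1]
    w1 = total - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(centers)
    # between-class variance, up to a constant factor 1/total**2
    between[valid] = (mu_t * w0[valid] - total * mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[between.argmax()]

def segment_vessels(green_channel, struct_size=7):
    """Binary vessel map from a fundus image's green channel."""
    enhanced = black_tophat(green_channel, size=struct_size)  # dark vessels become bright
    return enhanced > otsu_threshold(enhanced)

# Synthetic fundus-like image: bright background with one dark "vessel".
img = np.full((40, 40), 200.0)
img[18:21, :] = 60.0
seg = segment_vessels(img)
print(seg[19].all(), seg[0].any())  # True False
```

The green channel is conventionally used because it offers the highest vessel/background contrast in colour fundus photographs.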