Investigations have been carried out for digital spectral and textural classification of an Indian urban environment using SPOT images with grey level co-occurrence matrix (GLCM), grey level difference histogram (GLDH), and sum and difference histogram (SADH) approaches. The results indicate that a combination of texture and spectral features significantly improves the classification accuracy compared with classification using pure spectral features only. This improvement is about 9% and 17% for the addition of one and two texture features, respectively. GLDH and SADH give statistically similar results to GLCM, and take less computing time than GLCM. Conventional separability measures such as transformed divergence and Bhattacharyya distance are not effective in feature selection when classification is carried out with spectral and texture features. An alternative approach using simple statistics, such as average coefficient of variation, skewness, kurtosis, and correlation amongst feature sets, has shown greater feature selection potential when a combination of spectral and texture features is used.
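The texture measures named above can be combined with the spectral bands by computing a per-pixel texture value over a moving window and stacking it as an extra feature band. The following is a minimal sketch of that idea, not the authors' implementation: window size, displacement, angle, and grey-level quantisation are illustrative assumptions, and scikit-image and NumPy are assumed available. The SADH-style measure uses only the grey-level difference statistics, which is why it is cheaper to compute than the full co-occurrence matrix.

```python
# Minimal sketch: derive one GLCM-based and one sum-and-difference-histogram
# (SADH-style) texture band from a single SPOT band, so it can be stacked with
# the spectral bands before classification.  All parameter values are
# illustrative assumptions, not those used in the study.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

LEVELS = 32  # grey-level quantisation for the co-occurrence matrix

def glcm_contrast(window, distance=1, angle=0.0):
    """GLCM contrast of one window (assumes 8-bit input, requantised to LEVELS)."""
    q = (window.astype(np.uint16) * LEVELS // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=[angle],
                        levels=LEVELS, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

def sadh_contrast(window, d=(0, 1)):
    """Contrast from the difference histogram: mean squared grey-level
    difference for displacement d -- cheaper than building a full GLCM."""
    a = window[: window.shape[0] - d[0], : window.shape[1] - d[1]].astype(int)
    b = window[d[0]:, d[1]:].astype(int)
    return float(np.mean((a - b) ** 2))

def texture_band(band, win=7, measure=glcm_contrast):
    """Slide a win x win window over a 2-D band and return a texture image."""
    r = win // 2
    padded = np.pad(band, r, mode="reflect")
    out = np.empty(band.shape, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = measure(padded[i:i + win, j:j + win])
    return out

# Usage sketch: `bands` is a hypothetical (n_bands, H, W) array of 8-bit SPOT
# digital numbers; one or two texture bands are appended to form the combined
# spectral-plus-texture feature set fed to the classifier.
# features = np.concatenate([bands.astype(float),
#                            texture_band(bands[0])[None],
#                            texture_band(bands[0], measure=sadh_contrast)[None]])
```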
Introduction

Improved spatial resolution does not always lead to better classification results for urban environments with conventional spectral classification (Toll and Kennard 1984). The classification accuracy is a function of two counteracting factors, which change as a function of the local environment. The first factor is that finer spatial resolution results in an increase in the number of pure pixels and a decrease in the number of mixed pixels. This factor should increase the classification accuracy. On the other hand, the finer the spatial resolution, the larger the number of detectable sub-class elements. This implies high within-class spectral variance of classes corresponding to land cover units, which decreases their spectral separability and results in lower classification accuracy. As a result, classification accuracies may decrease for some environments as spatial resolution becomes finer (Toll 1984, Latty et al. 1985, Gastellu-Etchegorry 1989, Gastellu-Etchegorry 1990). This problem is avoided to a certain extent by defining training areas for new sub-classes and then carrying out post-classification merging of sub-classes. Thus the overall effect on classification depends not only on the spatial resolution of the image but also on the land cover type being classified (Cushnie and Atkinson 1985).
Diabetic retinopathy (DR) is a serious retinal disease and a leading cause of blindness worldwide. Ophthalmologists use optical coherence tomography (OCT) and fundus photography to assess retinal thickness and structure and to detect edema, hemorrhage, and scars. Deep learning models are mainly used to analyze OCT or fundus images, extract features characteristic of each stage of DR, and thereby classify images and stage the disease. In this paper, a deep Convolutional Neural Network (CNN) with 18 convolutional layers and 3 fully connected layers is proposed to analyze fundus images and automatically distinguish between controls (i.e. no DR), moderate DR (a combination of mild and moderate Non-Proliferative DR (NPDR)), and severe DR (a group of severe NPDR and Proliferative DR (PDR)), with a validation accuracy of 88%-89%, a sensitivity of 87%-89%, a specificity of 94%-95%, and a Quadratic Weighted Kappa score of 0.91-0.92 when 5-fold and 10-fold cross-validation were used, respectively. A pre-processing stage was applied in which images were resized and class-specific data augmentation was performed. The proposed approach is considerably accurate in objectively diagnosing and grading diabetic retinopathy, which obviates the need for a retina specialist and expands access to retinal care. This technology enables both early diagnosis and objective tracking of disease progression, which may help optimize medical therapy to minimize vision loss.
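A network of the general shape described above (18 convolutional layers followed by 3 fully connected layers, three output classes) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact architecture: the filter counts, kernel sizes, pooling scheme, 224x224 input resolution, and the omission of the class-specific augmentation and k-fold training loop are all assumptions made here for brevity.

```python
# Hedged sketch of a CNN in the spirit of the paper: 18 convolutional layers
# plus 3 fully connected layers for 3-class fundus grading (no DR / moderate /
# severe).  Layer widths and input size are illustrative assumptions.
import torch
import torch.nn as nn

class FundusCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        layers, in_ch = [], 3
        # Six blocks of three 3x3 conv layers each = 18 conv layers in total.
        for out_ch in (32, 64, 128, 128, 256, 256):
            for _ in range(3):
                layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                           nn.BatchNorm2d(out_ch),
                           nn.ReLU(inplace=True)]
                in_ch = out_ch
            layers.append(nn.MaxPool2d(2))          # halve spatial resolution
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(            # 3 fully connected layers
            nn.Flatten(),
            nn.Linear(256 * 3 * 3, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes))

    def forward(self, x):                           # x: (N, 3, 224, 224)
        return self.classifier(self.features(x))

# Usage sketch (resizing, class-specific augmentation and 5-/10-fold
# cross-validation splitting are omitted here):
model = FundusCNN()
logits = model(torch.randn(2, 3, 224, 224))         # -> shape (2, 3)
```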