The novel coronavirus disease (COVID‐19), caused by SARS‐CoV‐2, is spreading across the world and is affecting public health and the world economy. Artificial Intelligence (AI) can play a key role in enhancing COVID‐19 detection. However, lung infection by COVID‐19 is not easily quantifiable due to a lack of studies and the difficulty involved in collecting large datasets. Segmentation is a preferred technique to quantify and contour the COVID‐19 region in the lungs using computed tomography (CT) scan images. To address the dataset problem, we propose a deep neural network (DNN) model trained on a limited dataset where features are selected using a region‐specific approach. Specifically, we apply the Zernike moment (ZM) and gray level co‐occurrence matrix (GLCM) to extract the unique shape and texture features. The feature vectors computed from these techniques enable segmentation that illustrates the severity of the COVID‐19 infection. The proposed algorithm was compared with other existing state‐of‐the‐art deep neural networks on the Radiopaedia and COVID‐19 CT Segmentation datasets and achieved a specificity, sensitivity, mean absolute error (MAE), enhanced‐alignment measure (EMφ), and structure measure (Sm) of 0.942, 0.701, 0.082, 0.867, and 0.783, respectively. These metrics demonstrate the performance of the model in quantifying the COVID‐19 infection with limited datasets.
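The abstract names the GLCM contrast texture as one of the extracted features. A minimal sketch of how that feature can be computed for a grayscale patch is shown below; the function names, the 8-bit input assumption, the single (dx, dy) offset, and the number of gray levels are illustrative choices, not details taken from the paper:

```python
import numpy as np

def glcm(patch, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for a single pixel offset (dx, dy).
    Assumes an 8-bit grayscale patch; quantizes it to `levels` gray bins."""
    q = (patch.astype(int) * levels) // 256  # quantize 0..255 -> 0..levels-1
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()  # normalize counts to joint probabilities

def glcm_contrast(p):
    """GLCM contrast feature: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()
```

A uniform patch yields zero contrast, while a patch with sharp horizontal intensity changes yields a large value, which is what makes the feature useful for characterizing infected versus healthy lung texture.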
Mutants of the novel coronavirus (SARS-CoV-2, the cause of COVID-19) are spreading as different variants across the globe, affecting human health and the economy. Rapidly detecting COVID-19 infection and providing timely treatment remains a major challenge. For fast and cost-effective detection, artificial intelligence (AI) can play a key role in enhancing chest X-ray images and classifying them as infected/non-infected. However, AI needs huge datasets to train on to detect COVID-19 infection, which may impact the overall system speed. Therefore, a Deep Neural Network (DNN) is preferred over standard AI models to speed up classification using a set of features extracted from the datasets. Further, for accurate feature extraction, an algorithm that combines Zernike Moment Features (ZMF) and Gray Level Co-occurrence Matrix Features (GF) is proposed and implemented. The proposed algorithm uses 36 Zernike moment features together with variance and contrast textures, which helps to detect COVID-19 infection accurately. Finally, a Region Blocking (RB) approach with an optimum sub-image size (32 × 32) is employed to improve the processing speed by up to 2.6 times per image. This implementation presents an accuracy (A) of 93.4%, sensitivity (Se) of 72.4%, specificity (Sp) of 95%, precision (Pr) of 74.9%, and F1-score (F1) of 72.3%. These metrics illustrate that the proposed model can identify COVID-19 infection with a smaller dataset and accuracy up to 1.3 times higher than state-of-the-art existing models.
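The Region Blocking step described above partitions each image into non-overlapping 32 × 32 sub-images that can then be processed independently. A minimal sketch of that tiling, assuming the image dimensions are exact multiples of the block size (the function name is illustrative, not from the paper):

```python
import numpy as np

def region_blocks(img, block=32):
    """Split a 2-D grayscale image into non-overlapping block x block tiles.
    Assumes both image dimensions are exact multiples of `block`."""
    h, w = img.shape
    return (img.reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)          # group tiles by (row, col) position
               .reshape(-1, block, block))  # flatten to a list of tiles
```

Processing fixed-size tiles instead of the whole image is what allows the per-tile feature extraction to be vectorized or parallelized, which is consistent with the reported speed-up.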
Worldwide, more than 40,000 rice varieties exist, each with different nutritional content and quality. Identifying them must be consistent, automated, and accurate. Considering the feature extraction process, Convolutional Neural Networks (CNN) are preferred over classical machine learning (ML) for this classification. Transfer learning approaches help optimize the CNN model so that it fits on an FPGA. Seven different CNN models were proposed to classify five rice varieties; the models differ in kernel depth, the number of convolution layers (CL), the number of fully connected layers (FCL), and the number of neurons per FCL. These were analyzed using 70% of the data for training and 30% for testing, with a dataset of 15,000 images per variety at a fixed image resolution. This results in an Optimized Lightweight Convolutional Neural Network (OpLW-CNN) model, having one CL followed by two FCLs. The model was further analyzed using random sets of 500, 5,000, and 75,000 images to fit it optimally. It achieves 99%, 98.13%, and 98.14% specificity, F1-score, and accuracy, respectively, for a set of 5,000 images. These metrics are approximately 1% to 2% lower than those of the benchmark model, with 81.5% fewer computations. Also, this model requires less than a second to classify an image.
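The abstracts above report accuracy, sensitivity, specificity, precision, and F1-score. For reference, these all follow directly from the four confusion-matrix counts; the sketch below is a generic definition of the metrics, not the authors' evaluation code:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision, f1=f1)
```

Note that on imbalanced medical datasets accuracy alone is misleading, which is why the papers report sensitivity and specificity alongside it.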