COVID-19 is a disease that spreads easily with minimal physical contact. Currently, the World Health Organization (WHO) endorses the reverse transcription-polymerase chain reaction (RT-PCR) swab test as the diagnostic tool to confirm COVID-19 cases. Depending on the available facilities, this test requires at least a day for the results to come out. Many countries have adopted a targeted approach to screening potential patients due to the cost. However, a fast and accurate screening test is needed to complement this targeted approach so that potential virus carriers can be quarantined as early as possible. The X-ray is a good screening modality: it is quick to capture, cheap, and widely available, even in developing countries. Therefore, a deep learning approach is proposed to automate the screening process through LightCovidNet, a lightweight deep learning model suitable for mobile platforms. A lightweight model is important so that it can be used all over the world, even on a standard mobile phone. The model was trained with additional synthetic data generated by a conditional deep convolutional generative adversarial network. LightCovidNet consists of three components: entry, middle, and exit flows. The middle flow comprises five units of feed-forward convolutional neural networks built from separable convolution operators. The exit flow improves the multi-scale capability of the network through a simplified spatial pyramid pooling module. It is a symmetrical architecture with three parallel pooling branches that enable the network to learn multi-scale features, which suits cases in which the X-ray images were captured independently all over the world. In addition, the use of separable convolutions reduces memory usage without affecting classification accuracy.
The proposed method achieves the best mean accuracy of 0.9697 with a low memory requirement of just 841,771 parameters. Moreover, the symmetrical spatial pyramid pooling module is the most crucial component; removing it reduces the screening accuracy to just 0.9237. Hence, the developed model is suitable for mass COVID-19 screening.
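The memory savings from separable convolutions can be illustrated with a simple parameter count. The layer sizes below are illustrative only and are not taken from LightCovidNet itself:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise
    convolution (bias omitted), as in separable convolution layers."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 128 -> 256 channels with a 3x3 kernel
std = conv_params(128, 256, 3)            # 294,912 parameters
sep = separable_conv_params(128, 256, 3)  # 33,920 parameters
print(std, sep, round(std / sep, 1))      # roughly 8.7x fewer parameters
```

The savings grow with channel count, which is why separable operators keep the total parameter budget below one million in models of this kind.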
Pterygium is an eye condition that is prevalent among workers who are frequently exposed to sunlight radiation. However, most of them are unaware of this condition, which motivates many volunteers to set up health awareness booths offering free health screening. As a result, a screening tool that can be operated on various platforms is needed to support automated pterygium assessment. One of the crucial functions of this assessment is extracting the infected regions, whose extent directly correlates with the severity level. Hence, Group-PPM-Net is proposed, integrating a spatial pyramid pooling module (PPM) and group convolution into a deep learning segmentation network. The system takes input from a standard mobile phone camera, which is then fed to a modified encoder-decoder convolutional neural network inspired by the Fully Convolutional Dense Network, consisting of a total of 11 dense blocks. A PPM is integrated into the network because of its multi-scale capability, which is useful for multi-scale tissue extraction: the shape of the tissues remains relatively constant, but their size differs according to the severity level. Moreover, group and shuffle convolution modules are integrated on the decoder side of Group-PPM-Net by placing them at the starting layer of each dense block. These modules allow better correlation among the filters in each group, while the shuffle process increases the channel variation that the filters can learn from. The results show that the proposed method obtains mean accuracy, mean intersection over union, Hausdorff distance, and Jaccard index performances of 0.9330, 0.8640, 11.5474, and 0.7966, respectively.
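The channel shuffle operation referred to above can be sketched in a few lines. This is a minimal numpy illustration of the standard group-shuffle idea (interleaving channels across groups so that subsequent group convolutions mix information between groups); the exact placement and group counts in Group-PPM-Net are not specified here:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups (shuffle-convolution style).
    x: feature map of shape (channels, height, width)."""
    c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    # Reshape to (groups, channels_per_group, H, W), swap the two group
    # axes, then flatten back so each group receives channels that
    # originated in every other group.
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# 6 channels in 3 groups: channel order 0..5 becomes 0, 2, 4, 1, 3, 5
x = np.arange(6).reshape(6, 1, 1) * np.ones((6, 2, 2))
shuffled = channel_shuffle(x, groups=3)
print([int(shuffled[i, 0, 0]) for i in range(6)])
```

After the shuffle, a group convolution over consecutive channel blocks sees a mixture of the original groups, which is the stated mechanism for increasing the channel variation each filter can learn from.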
COVID-19 is a contagious disease that had caused more than 230,000 deaths worldwide by the end of April 2020. Within a span of just a few months, it infected more than 4 million people across the globe due to its high transmission rate. Thus, many governments have tried their best to increase the diagnostic capacity of their hospitals so that the disease can be identified as early as possible. However, in most cases the results only come back after a day or two, which directly increases the possibility of disease spread because of the delayed diagnosis. Therefore, a fast screening method using existing tools such as X-ray and computerized tomography scans can help alleviate the burden of mass diagnostic testing. A chest X-ray is one of the best modalities for diagnosing pneumonia, which is the primary symptom of COVID-19. Hence, this paper proposes a lightweight deep learning model to screen for COVID-19 accurately. A lightweight model is important, as it allows the model to be deployed on various platforms, including mobile phones, tablets, and ordinary computers, without worrying about memory storage capacity. The proposed model is based on a 14-layer convolutional neural network with a modified spatial pyramid pooling module. The multiscale ability of the proposed network allows it to identify COVID-19 at various severity levels. According to the performance results, the proposed SPP-COVID-Net achieves the best mean accuracy of 0.946 with the lowest standard deviation among the training-fold accuracies. It comprises just 862,331 parameters, using less than 4 megabytes of memory storage. The model is suitable for fast screening so that better-targeted diagnoses can be performed to optimize test time and cost.
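The key property of spatial pyramid pooling is that it produces a fixed-length feature vector from feature maps of any spatial size, which is what gives the network its multiscale ability. A minimal numpy sketch of the classic max-pooling pyramid follows; the pyramid levels and channel count are illustrative and not taken from SPP-COVID-Net:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over several grid sizes and
    concatenate the results, yielding a vector whose length depends
    only on C and the pyramid levels, never on H or W."""
    c, h, w = feat.shape
    out = []
    for n in levels:
        # Split rows and columns into n roughly equal bins,
        # then max-pool each grid cell over its spatial extent.
        rows = np.array_split(np.arange(h), n)
        cols = np.array_split(np.arange(w), n)
        for r in rows:
            for cl in cols:
                out.append(feat[:, r][:, :, cl].max(axis=(1, 2)))
    return np.concatenate(out)

# Different input sizes give the same output length: C * (1 + 4 + 16)
a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
b = spatial_pyramid_pool(np.random.rand(8, 32, 32))
print(a.shape, b.shape)  # both (168,)
```

Because the pooled vector length is size-independent, images collected at different resolutions from different sources can be fed to the same fully connected classifier head without resizing artifacts.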
The squat exercise is frequently used in physiotherapy rehabilitation for stroke patients. In the early stage of rehabilitation, patients are urged to avoid deep squats, as the strain on the tendons and ligaments is much higher than in the half-squat exercise. Therefore, it is important for patients to be aware of their squat depth. One way to measure squat depth is with a wearable device, which adds unnecessary weight to the patients and makes them uncomfortable. Thus, we propose a single-camera system that captures video from the frontal view to measure the squat angle continuously according to the number of frames per second. The system provides a knee angle measurement for every frame based on a combined approach of deep learning tracking and deep belief network regressors. The proposed system requires just a bounding box input of the whole test subject taken in an upright position, which serves as the input to a convolutional neural network-based tracker. Both the head and upper body of the exerciser are tracked independently. The resulting tracked points are normalized by the test subject's height to find the ratio of height to the corresponding points. The ratio features then serve as the input to multiple deep belief network regressors to predict the knee angle, with the mean of the ratio features used to route each input frame to its respective regressor. The experimental results show that the system produces the lowest mean error angle of 8.64° with a setup of five regressors, each consisting of five hidden layers. Hence, the system is suitable for a squat angle monitoring system that notifies patients of their squat depth.

INDEX TERMS Squat angle analysis, physiotherapy monitoring, visual object tracking, deep belief networks regressor.
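The feature pipeline described above (height-normalized ratios whose mean routes a frame to one of several regressors) can be sketched as follows. The two tracked points, the uniform binning over [0, 1], and the bin boundaries are assumptions for illustration; the abstract does not specify how the mean partitions the regressors:

```python
def ratio_features(head_y, torso_y, subject_height):
    """Normalize tracked vertical positions by the subject's height,
    giving scale-invariant ratio features for a single frame."""
    return [head_y / subject_height, torso_y / subject_height]

def select_regressor(features, n_regressors=5):
    """Route a frame to one of n regressors by the mean ratio,
    assuming mean ratios lie in [0, 1] (hypothetical uniform bins)."""
    m = sum(features) / len(features)
    return min(int(m * n_regressors), n_regressors - 1)

# Hypothetical frame: pixel positions and height in the same units
feats = ratio_features(head_y=30.0, torso_y=80.0, subject_height=170.0)
print(select_regressor(feats))  # mean ~= 0.324 -> regressor 1
```

Each selected regressor would then map its ratio features to a knee angle, so that every regressor specializes in a narrow range of squat depths.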