Objective: Pneumonia is a lung infection that causes inflammation of the small air sacs (alveoli) in one or both lungs. Prompt and accurate diagnosis of pneumonia at an early stage is imperative for optimal patient care. Chest X-ray is currently considered the best imaging modality for diagnosing pneumonia; however, the interpretation of chest X-ray images is challenging. To this end, we aimed to use an automated convolutional neural network-based transfer-learning approach to detect pneumonia in paediatric chest radiographs. Methods: An automated convolutional neural network-based transfer-learning approach using four different pre-trained models (VGG19, DenseNet121, Xception, and ResNet50) was applied to detect pneumonia in chest X-ray images of children aged 1–5 years. The performance of the proposed models on the test dataset was evaluated using five performance metrics: accuracy, sensitivity/recall, precision, area under the curve (AUC), and F1-score. Results: All proposed models achieved an accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 model provided the highest classification performance for automated pneumonia detection, with 86.8% accuracy, followed by the Xception model with 86.0% accuracy. The sensitivity of all proposed models was greater than 91.0%. The Xception and DenseNet121 models achieved the highest F1-scores, both greater than 89.0%. The areas under the receiver operating characteristic curves of the VGG19, Xception, ResNet50, and DenseNet121 models were 0.78, 0.81, 0.81, and 0.86, respectively. Conclusion: Our data showed that the proposed models achieve high accuracy for binary classification. Transfer learning was used to accelerate training of the proposed models and to mitigate the problem of insufficient data. We hope these models can help radiologists make a quick diagnosis of pneumonia in radiology departments. Moreover, they may be useful for detecting other chest diseases such as coronavirus disease 2019 (COVID-19). Advances in knowledge: We used transfer learning, a machine learning approach, to accelerate training of the proposed models and to mitigate the problem of insufficient data; the proposed models achieved accuracy greater than 83.0% for binary classification.
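The abstract does not give implementation details, so the following is a minimal sketch of the kind of transfer-learning setup it describes, assuming TensorFlow/Keras and ImageNet weights for DenseNet121. The input size, classification head, optimizer settings, and the hypothetical `train_ds`/`val_ds` datasets are assumptions, not the authors' reported configuration.

```python
# Minimal transfer-learning sketch (assumptions: TensorFlow/Keras, 224x224 RGB
# inputs; the paper's exact preprocessing and hyperparameters are not given).
import tensorflow as tf
from tensorflow.keras import layers, models

# Load DenseNet121 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional features (transfer learning)

# Attach a small binary-classification head (normal vs. pneumonia).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

# Track the same metrics the study reports: accuracy, precision, recall, AUC.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall"),
             tf.keras.metrics.AUC(name="auc")],
)

# "train_ds" / "val_ds" are hypothetical labelled image datasets, e.g. built
# with tf.keras.utils.image_dataset_from_directory.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping the base model for VGG19, Xception, or ResNet50 (the other networks compared in the study) only changes the `tf.keras.applications` constructor; the frozen-backbone pattern stays the same.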
Purpose: To train a convolutional neural network (CNN) model from scratch to automatically detect tuberculosis (TB) in chest X-ray (CXR) images and to compare its performance with transfer-learning techniques based on different pre-trained CNNs. Materials and methods: We used two publicly available datasets of postero-anterior chest radiographs, from Montgomery County, Maryland, USA, and Shenzhen, China. A CNN (ConvNet) was trained from scratch to automatically detect TB on chest radiographs. In addition, a CNN-based transfer-learning approach using five different pre-trained models (Inception_v3, Xception, ResNet50, VGG19, and VGG16) was used to classify TB and normal cases from CXR images. Model performance on the test datasets was evaluated using five performance metrics: accuracy, sensitivity/recall, precision, area under the curve (AUC), and F1-score. Results: All proposed models provided acceptable accuracy for two-class classification. Our proposed CNN architecture (ConvNet) achieved 88.0% precision, 87.0% sensitivity, 87.0% F1-score, 87.0% accuracy, and an AUC of 87.0%, slightly lower than the pre-trained models. Among all models, Xception, ResNet50, and VGG16 provided the highest classification performance for automated TB detection, with precision, sensitivity, F1-score, and AUC of 91.0%, and 90.0% accuracy. Conclusions: Our study presents a transfer-learning approach with deep CNNs to automatically classify TB and normal cases from chest radiographs. The classification accuracy, precision, sensitivity, and F1-score for the detection of TB exceeded 87.0% for all models used in the study. The Xception, ResNet50, and VGG16 models outperformed the other deep CNN models on the datasets when image augmentation methods were applied.
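The layer counts and filter sizes of the study's from-scratch ConvNet are not stated in the abstract; the sketch below is an illustrative small CNN of the kind described, again assuming TensorFlow/Keras. The augmentation transforms are likewise assumptions standing in for the unspecified "image augmentation methods".

```python
# Illustrative from-scratch CNN ("ConvNet"-style) for TB vs. normal CXRs.
# Architecture details are assumptions; the paper's exact design is not given.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convnet(input_shape=(224, 224, 1)):
    """Small convolutional binary classifier trained from scratch."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # binary: TB vs. normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical augmentation pipeline; the study's specific transforms are
# not reported, so these are common choices for chest radiographs.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])
```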
BACKGROUND: Work-related musculoskeletal disorders are the most common occupational health hazard. In the flour production industry, the fast pace of work, highly repetitive movements, manual handling of loads, and awkward postures place considerable strain on workers' bodies. OBJECTIVE: Given the high exposure of flour production workers to ergonomic risk factors, this study aimed to reduce the rate of musculoskeletal disorders among a group of flour factory workers through ergonomic interventions. MATERIALS AND METHODS: This interventional study was performed using the census method on the eligible workers of a flour factory. An ergonomic intervention program was planned and implemented with the goal of reducing musculoskeletal disorders. Its effectiveness was evaluated by measuring the prevalence of musculoskeletal disorders before and six months after the interventions. RESULTS: Before the intervention, musculoskeletal disorders were most prevalent in the lower back, followed by the arms, shoulders, legs, thighs, knees, neck, and wrists. After the intervention, the prevalence of musculoskeletal disorders in the neck, shoulders, lower back, thighs, knees, and legs was significantly reduced (P < 0.05), demonstrating the positive effect of the ergonomic intervention program. CONCLUSION: The engineering and management interventions implemented in this study led to a significant reduction in ergonomic risk factors and a reduced rate of musculoskeletal disorders among workers in different units of the flour factory.
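The abstract reports significance (P < 0.05) for the before/after prevalence comparison but does not name the statistical test. One standard choice for paired binary outcomes (disorder present/absent in the same worker before and after) is McNemar's test; the sketch below assumes that test, with purely illustrative counts.

```python
# Hypothetical before/after analysis for one body region. McNemar's test for
# paired binary outcomes is an assumption; the study's actual test is unstated.
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes (illustrative counts, not study data):
# rows = disorder before intervention (yes/no),
# cols = disorder after intervention (yes/no).
table = [[30, 25],   # yes -> yes, yes -> no
         [5,  40]]   # no  -> yes, no  -> no

result = mcnemar(table, exact=True)  # exact binomial version of the test
print(f"statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
```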
In this study, an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC), which accounts for anatomical flexibility, rigidity, and motion within an image deformation, was proposed. The data comprised 57 CT scans (7,202 2D slices) of patients with LACC, randomly divided into training (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented and the treatment couch was removed. A correlated stochastic field of the same size as the target image (used for deformation) was simulated to produce a general random deformation. The deformation field was optimized to have maximum amplitude in the rectum region, moderate amplitude in the bladder region, and as little amplitude as possible within bony structures. DIRNet, a convolutional neural network consisting of convolutional regressor, spatial transformation, and resampling blocks, was implemented with different parameters. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is therefore proposed for inter-fraction high-dose-rate brachytherapy of cervical cancer, since it outperformed conventional algorithms.
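For reference, the three evaluation metrics reported above (Dice index, Jaccard index, and average symmetric surface distance) can be computed from binary segmentation masks as sketched below. This is a generic NumPy/SciPy implementation under the usual definitions, not the study's own evaluation code; the surface extraction via mask-minus-erosion is one common convention.

```python
# Sketch of the overlap/surface metrics reported above; generic definitions,
# not the authors' implementation.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice index between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Average symmetric surface distance between two binary masks, in the
    units of `spacing` (e.g. mm). Surfaces are taken as mask minus erosion."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from each voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]  # distances: surface of A -> surface of B
    d_ba = dist_to_a[surf_b]  # distances: surface of B -> surface of A
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
```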