Background: Coronavirus disease 2019 (COVID-19) is an emerging infectious disease and a global health crisis. Although real-time reverse transcription polymerase chain reaction (RT-PCR) is the most widely used laboratory method for detecting COVID-19 in respiratory specimens, it suffers from several major drawbacks, including long turnaround time, high false-negative rates, and limited availability. Automated detection of COVID-19 is therefore needed. Objective: This study aimed to use automated deep convolutional neural networks based on pre-trained transfer models to detect COVID-19 infection in chest X-rays. Material and Methods: In a retrospective study, we applied the Visual Geometry Group (VGG)-16, VGG-19, MobileNet, and InceptionResNetV2 pre-trained models to detect COVID-19 infection in 348 chest X-ray images. Results: The proposed models were trained and tested on a previously prepared dataset. All proposed models achieved accuracy greater than 90.0%. The pre-trained MobileNet model provided the highest performance for automated COVID-19 classification, with 99.1% accuracy, compared with the other three proposed models. The areas under the receiver operating characteristic (ROC) curves (AUC) of the VGG16, VGG19, MobileNet, and InceptionResNetV2 models were 0.92, 0.91, 0.99, and 0.97, respectively. Conclusion: All proposed models performed binary classification with accuracy greater than 90.0% for COVID-19 diagnosis. Our data indicate that MobileNet can be considered a promising model for detecting COVID-19 cases. In the future, adding more COVID-19 chest X-ray samples to the training dataset should further increase the accuracy and robustness of the proposed models.
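The AUC values reported above can be computed directly from model output scores. A minimal sketch in plain Python, using the rank-based (Mann-Whitney) formulation of the area under the ROC curve; the function name and the toy scores/labels are illustrative, not taken from the study:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted probabilities for the positive class
    labels: 1 for a positive case (e.g. COVID-19), 0 for negative
    Returns the probability that a random positive case is scored
    above a random negative case (ties count as 0.5).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positive scores over negative scores.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one misranked pair out of four gives AUC = 0.75.
auc = roc_auc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1])
```

This pairwise formulation is equivalent to the area obtained by plotting the ROC curve and integrating, and is convenient for small validation sets.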
Objective: Pneumonia is a lung infection that causes inflammation of the small air sacs (alveoli) in one or both lungs. Prompt and accurate diagnosis of pneumonia at an early stage is imperative for optimal patient care. Chest X-ray is currently considered the best imaging modality for diagnosing pneumonia; however, interpreting chest X-ray images is challenging. To this end, we aimed to use an automated convolutional neural network-based transfer-learning approach to detect pneumonia in paediatric chest radiographs. Methods: An automated convolutional neural network-based transfer-learning approach using four pre-trained models (VGG19, DenseNet121, Xception, and ResNet50) was applied to detect pneumonia in chest X-ray images of children aged 1–5 years. The performance of the proposed models on the test set was evaluated using five metrics: accuracy, sensitivity/recall, precision, area under the curve, and F1 score. Results: All proposed models achieved accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 model provided the highest performance for automated pneumonia classification, with 86.8% accuracy, followed by the Xception model with 86.0%. The sensitivity of the proposed models was greater than 91.0%. The Xception and DenseNet121 models achieved the highest classification performance, with F1 scores greater than 89.0%. The areas under the receiver operating characteristic curves of the VGG19, Xception, ResNet50, and DenseNet121 models were 0.78, 0.81, 0.81, and 0.86, respectively. Conclusion: Our data show that the proposed models achieve high accuracy for binary classification. Transfer learning was used to accelerate training of the proposed models and to mitigate the problem of insufficient data. We hope that these models can help radiologists make a quick diagnosis of pneumonia in radiology departments.
Moreover, our proposed models may be useful for detecting other chest-related diseases such as the novel coronavirus disease 2019. Advances in knowledge: We used transfer learning as a machine learning approach to accelerate training of the proposed models and to mitigate the problem of insufficient data. Our proposed models achieved accuracy greater than 83.0% for binary classification.
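The five performance metrics used above all follow from the binary confusion matrix. A minimal sketch in plain Python (the function name and the example counts are illustrative, not figures from the study):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives (e.g. pneumonia = positive class).
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)            # fraction of positive calls that are correct
    recall = tp / (tp + fn)               # sensitivity: fraction of positives found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical test-set counts for illustration only.
m = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

The AUC, the fifth metric, is computed from the ranking of raw scores rather than from thresholded counts, so it is not derivable from the confusion matrix alone.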
Purpose: This study was designed to assess the dose accumulation (DA) of the bladder and rectum between brachytherapy fractions using hybrid-based deformable image registration (DIR) and to compare it with the simple summation (SS) approach of GEC-ESTRO in cervical cancer patients. Material and methods: Patients (n = 137) with cervical cancer treated with 3D conformal radiotherapy and three fractions of high-dose-rate brachytherapy were selected. CT images were acquired to delineate organs at risk and targets according to GEC-ESTRO recommendations. To determine the DA for the bladder and rectum, hybrid-based DIR was performed across the three brachytherapy fractions, and the results were compared with the standard GEC-ESTRO method. We also performed a phantom study to quantify the uncertainty of the hybrid-based DIR algorithm for contour matching and dose mapping. Results: The mean ± standard deviation (SD) of the Dice similarity coefficient (DICE), Jaccard index, Hausdorff distance (HD), and mean distance to agreement (MDA) in the DIR process were 0.94 ±0.02, 0.89 ±0.03, 8.44 ±3.56, and 0.72 ±0.22 for the bladder and 0.89 ±0.05, 0.80 ±0.07, 15.46 ±10.14, and 1.19 ±0.59 for the rectum, respectively. The median (Q1, Q3; maximum) GyEQD2 differences of total D2cc between the DIR-based and SS methods were reduced by –1.53 (–0.86, –2.98; –9.17) for the bladder and –1.38 (–0.80, –2.14; –7.11) for the rectum. For large deformation, the mean ± SD of DICE, Jaccard, HD, and MDA for contour matching were 0.98 ±0.008, 0.97 ±0.01, 2.00 ±0.70, and 0.20 ±0.04, respectively. The maximum uncertainty of dose mapping was about 3.58%. Conclusions: The hybrid-based DIR algorithm demonstrated low registration uncertainty for both contour matching and dose mapping. The DA difference between the DIR-based and SS approaches was statistically significant for both the bladder and rectum, and hybrid-based DIR showed potential for assessing DA between brachytherapy fractions.
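The contour-matching scores above (DICE and Jaccard) are overlap measures between the deformed and reference contours. A minimal sketch in plain Python, representing each contour as a set of voxel indices; the function name and toy sets are illustrative assumptions, not the study's implementation:

```python
def dice_jaccard(a, b):
    """Overlap between two binary contours.

    a, b: sets of voxel indices inside each contour (hypothetical
    representation; real DIR software works on binary masks).
    Returns (Dice similarity coefficient, Jaccard index).
    """
    inter = len(a & b)
    dice = 2 * inter / (len(a) + len(b))   # 2|A∩B| / (|A|+|B|)
    jaccard = inter / len(a | b)            # |A∩B| / |A∪B|
    return dice, jaccard

# Toy example: two 3-voxel contours sharing 2 voxels.
d, j = dice_jaccard({1, 2, 3}, {2, 3, 4})
```

Dice is always at least as large as Jaccard for the same pair of contours (Dice = 2J/(1+J)), which is consistent with the paired values reported above (e.g. 0.94 vs. 0.89 for the bladder). Distance-based measures such as HD and MDA require voxel coordinates and a distance metric, so they are omitted from this sketch.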