Purpose: To train a convolutional neural network (CNN) model from scratch to automatically detect tuberculosis (TB) from chest X-ray (CXR) images, and to compare its performance with transfer-learning approaches based on different pre-trained CNNs. Material and methods: We used two publicly available datasets of postero-anterior chest radiographs, from Montgomery County, Maryland, and Shenzhen, China. A CNN (ConvNet) was trained from scratch to automatically detect TB on chest radiographs. In addition, a CNN-based transfer-learning approach using five pre-trained models (Inception_v3, Xception, ResNet50, VGG19, and VGG16) was used to classify TB and normal cases from CXR images. Model performance on the test sets was evaluated using five metrics: accuracy, sensitivity/recall, precision, area under the curve (AUC), and F1-score. Results: All proposed models provided acceptable accuracy for two-class classification. Our proposed CNN architecture (i.e., ConvNet) achieved 88.0% precision, 87.0% sensitivity, 87.0% F1-score, 87.0% accuracy, and an AUC of 87.0%, slightly below the pre-trained models. Among all models, Xception, ResNet50, and VGG16 provided the highest classification performance for automated TB classification, with precision, sensitivity, F1-score, and AUC of 91.0%, and 90.0% accuracy. Conclusions: Our study presents a transfer-learning approach with deep CNNs to automatically classify TB and normal cases from chest radiographs. The classification accuracy, precision, sensitivity, and F1-score for TB detection exceeded 87.0% for all models used in the study. Xception, ResNet50, and VGG16 outperformed the other deep CNN models on the datasets with image-augmentation methods.
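The evaluation metrics named above all derive from a binary confusion matrix. As a minimal illustrative sketch (not the study's code; the counts below are hypothetical), accuracy, precision, sensitivity/recall, and F1-score can be computed as:

```python
# Sketch: standard binary-classification metrics from confusion-matrix counts.
# tp/fp/fn/tn and the example numbers are illustrative, not from the paper.

def classification_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) for one binary classifier."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical test split: 100 TB cases, 100 normal cases
acc, prec, rec, f1 = classification_metrics(tp=87, fp=3, fn=13, tn=97)
```

With these hypothetical counts, sensitivity is 87/100 = 0.87, matching the order of magnitude of the figures reported for ConvNet.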
Multiple myeloma is a type of blood cancer that occurs when abnormal plasma cells grow out of control in the bone marrow. Multiple myeloma can be diagnosed in several ways, such as a complete blood count (CBC) test or counting myeloma plasma cells in aspirate slide images, either by manual visual inspection or through image-processing techniques. In this work, an automatic deep learning method for the detection and segmentation of multiple myeloma plasma cells is explored. To this end, a two-stage deep learning method is designed. In the first stage, a nucleus detection network extracts each instance of a cell of interest. The extracted instance is then fed to a multi-scale function to generate a multi-scale representation. The objective of the multi-scale function is to capture shape variation and reduce the effect of object scale on the cytoplasm segmentation network. The generated scales are then fed into a pyramid of cytoplasm networks to learn segmentation maps at various scales. On top of the cytoplasm segmentation networks, we include a scale aggregation function to refine the outputs and generate a final prediction. The proposed approach was evaluated on the SegPC-2021 grand challenge and ranked second in the final test phase among all teams.
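The multi-scale step above resamples each detected cell crop at several scale factors before segmentation. The abstract does not give the resampling method or scale factors, so the following is only a minimal sketch, assuming nearest-neighbour resampling and scale factors of 0.5x, 1x, and 2x:

```python
# Sketch of a multi-scale representation (assumed details: nearest-neighbour
# resampling, scale factors (0.5, 1.0, 2.0)); not the authors' implementation.

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D list-of-lists image."""
    in_h, in_w = len(img), len(img[0])
    return [[img[(r * in_h) // out_h][(c * in_w) // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def multi_scale(crop, scales=(0.5, 1.0, 2.0)):
    """Return the cell crop resampled at each scale factor."""
    h, w = len(crop), len(crop[0])
    return [resize_nearest(crop, max(1, int(h * s)), max(1, int(w * s)))
            for s in scales]

pyramid = multi_scale([[0, 1], [2, 3]])  # three versions of one 2x2 crop
```

Each element of `pyramid` would then be fed to the corresponding cytoplasm segmentation network in the pyramid, and the per-scale predictions combined by the aggregation function.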
An essential step of successful brachytherapy is precise applicator/needle trajectory detection, which remains an open problem. This study proposes a two-phase deep-learning-based method to automate the localization of high-dose-rate (HDR) prostate brachytherapy catheters in the patient's CT images. The process is divided into two phases using two different deep neural networks. First, brachytherapy needle segmentation is accomplished with a pix2pix Generative Adversarial Network (pix2pix GAN). Second, Generic Object Tracking Using Regression Networks (GOTURN) is used to predict the needle trajectories. These models were trained and tested on a clinical prostate brachytherapy dataset. Of the 25 patients in total, 5 patients comprising 592 slices were dedicated to the test set, and the rest were used as the training/validation set. The total number of needles in these CT slices was 8764, of which the pix2pix network segmented 98.72% (8652 in total). The Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) between the network output and the ground truth were 0.95 and 0.90, respectively. Moreover, the F1-score, recall, and precision were 0.95, 0.93, and 0.97, respectively. Regarding the location of the shafts, the proposed model had an error of 0.41 mm. This study proposes a novel methodology to automatically localize and reconstruct prostate HDR brachytherapy interstitial needles in 3D CT images. The presented method can be used as a computer-aided module in clinical applications to automatically detect and delineate multiple catheters, potentially enhancing treatment quality.
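The DSC and IoU reported above are overlap measures between the predicted and ground-truth needle masks. A minimal sketch of how they are computed on flat binary masks (illustrative code, not the study's implementation):

```python
# Sketch: Dice Similarity Coefficient and Intersection over Union between
# two equal-length binary masks (flattened segmentation maps of 0/1 labels).

def dice_and_iou(pred, truth):
    """Return (DSC, IoU) for two binary masks of the same length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two measures are linked by DSC = 2·IoU/(1 + IoU); the reported values are consistent with this identity, since 2·0.90/1.90 ≈ 0.95.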