Because of the rapid spread and the wide range of clinical manifestations of coronavirus disease 2019 (COVID-19), fast and accurate estimation of disease progression and mortality is vital for the management of patients. Currently available image-based prognostic predictors for patients with COVID-19 are largely limited to semi-automated schemes with manually designed features and supervised learning, and the survival analysis is largely limited to logistic regression. We developed a weakly unsupervised conditional generative adversarial network, called pix2surv, that can be trained to estimate time-to-event information for survival analysis directly from the chest computed tomography (CT) images of a patient. We show that pix2surv, based on CT images, significantly outperforms existing laboratory tests and image-based visual and quantitative predictors in estimating the disease progression and mortality of COVID-19 patients. Thus, pix2surv is a promising approach for image-based prognostic prediction.
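Time-to-event estimates of the kind pix2surv produces are commonly evaluated with the concordance index (C-index), which measures how often pairs of patients are ranked in the observed order while accounting for censoring. The sketch below is illustrative only and is not code from the study; the function name and the simple O(n²) pairwise loop are my own.

```python
def concordance_index(event_times, predicted_times, event_observed):
    """Concordance index for time-to-event predictions.

    A pair (i, j) is comparable when patient i had an observed event
    before time j; the pair is concordant when the predicted times
    preserve that ordering. Ties in the prediction count as 0.5.
    """
    n_concordant = 0.0
    n_comparable = 0
    n = len(event_times)
    for i in range(n):
        for j in range(n):
            if event_observed[i] and event_times[i] < event_times[j]:
                n_comparable += 1
                if predicted_times[i] < predicted_times[j]:
                    n_concordant += 1.0
                elif predicted_times[i] == predicted_times[j]:
                    n_concordant += 0.5
    return n_concordant / n_comparable
```

A C-index of 1.0 indicates perfect ranking, 0.5 indicates random ranking; production survival-analysis libraries provide optimized versions of this statistic.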
Purpose
Deep learning can be used for improving the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and the variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography.
Methods
We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs of its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet.
Results
The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different from chance-level performance in distinguishing between real and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods.
Conclusion
The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
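For context, the conventional (non-generative) augmentation that generative approaches such as 3D Glow are typically compared against consists of geometric transformations of the training VOIs. The following is a minimal illustrative sketch of such classical 3D augmentation, not code from the study; the function name and the choice of flips and 90-degree rotations are my own assumptions.

```python
import numpy as np

def augment_voi(voi, rng):
    """Classical 3D augmentation of a cubic volume of interest (VOI):
    random flips along each axis plus a random 90-degree rotation.
    Unlike generative augmentation, this only rearranges existing
    voxel data and cannot synthesize new polyp morphology."""
    for axis in range(3):
        if rng.random() < 0.5:
            voi = np.flip(voi, axis=axis)
    k = int(rng.integers(0, 4))  # number of 90-degree rotations
    voi = np.rot90(voi, k=k, axes=(1, 2))
    return voi
```

Because these transformations preserve the voxel values exactly, the augmented dataset stays within the geometry of the original cases; generative models such as 3D Glow aim to go beyond this by sampling novel but realistic polyp appearances.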
In computer-aided diagnosis systems for lung cancer, segmentation of lung nodules is important for analyzing image features of lung nodules on computed tomography (CT) images and for distinguishing malignant nodules from benign ones. However, it is difficult to segment accurately and robustly lung nodules that are attached to the chest wall or that exhibit ground-glass opacities using conventional image processing methods. Therefore, this study aimed to develop a method for robust and accurate three-dimensional (3D) segmentation of lung nodule regions using deep learning. We proposed a nested 3D fully connected convolutional network with residual unit structures and designed a new loss function. Compared with annotated images obtained under the guidance of a radiologist, the Dice similarity coefficient (DS) and intersection over union (IoU) were 0.845 ± 0.008 and 0.738 ± 0.011, respectively, for 332 lung nodules (lung adenocarcinoma) obtained from 332 patients. In contrast, for 3D U-Net and 3D SegNet, the DS was 0.822 ± 0.009 and 0.786 ± 0.011, respectively, and the IoU was 0.711 ± 0.011 and 0.660 ± 0.012, respectively. These results indicate that the proposed method is significantly superior to these well-known deep learning models. Moreover, we compared the results of the proposed method with those of conventional image processing methods, namely the watershed and graph-cut methods. The DS and IoU for the watershed method were 0.628 ± 0.027 and 0.494 ± 0.025, respectively, and those for the graph-cut method were 0.566 ± 0.025 and 0.414 ± 0.021, respectively. These results indicate that the proposed method is also significantly superior to conventional image processing methods. The proposed method may be useful for accurate and robust segmentation of lung nodules to assist radiologists in the diagnosis of lung nodules, such as lung adenocarcinoma, on CT images.
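The Dice similarity coefficient and intersection over union reported above are standard overlap metrics for binary segmentation masks. A minimal sketch of how they are computed (illustrative only; the function name is my own):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity coefficient (DS) and intersection over union (IoU)
    for two binary 3D segmentation masks of the same shape.

    DS  = 2|P ∩ T| / (|P| + |T|)
    IoU =  |P ∩ T| / |P ∪ T|
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / union
    return dice, iou
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); Dice weights the intersection more heavily, which is why the reported DS values are consistently higher than the corresponding IoU values.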