Synthesizing an image from a given text description involves two types of information: the content, which is explicitly described in the text (e.g., color, composition), and the style, which is usually not well described in the text (e.g., location, quantity, size). Previous works, however, typically treat this task as generating images from the content alone, without learning meaningful style representations. In this paper, we aim to learn two disentangled variables in the latent space, representing content and style respectively. We achieve this by augmenting current text-to-image synthesis frameworks with a dual adversarial inference mechanism. Through extensive experiments, we show that our model learns, in an unsupervised manner, style representations that correspond to meaningful image information not well described in the text. The new framework also improves the quality of synthesized images when evaluated on the Oxford-102, CUB, and COCO datasets.
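To make the content/style split concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a generator conditions on a content code projected from the text embedding plus a style code sampled from a prior, and an inference network maps an image back to estimates of both codes so they can be matched adversarially. All module names and layer sizes here are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# the generator takes [content, style]; the inference net inverts an image
# back to (content, style) estimates for adversarial matching.
import torch
import torch.nn as nn

class ContentStyleGenerator(nn.Module):
    def __init__(self, text_dim=256, content_dim=64, style_dim=32, img_pixels=64 * 64 * 3):
        super().__init__()
        # Content code is a learned projection of the text embedding.
        self.to_content = nn.Sequential(nn.Linear(text_dim, content_dim), nn.ReLU())
        # Decoder consumes the concatenated [content, style] code.
        self.decode = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),
        )

    def forward(self, text_emb, style_z):
        content = self.to_content(text_emb)
        return self.decode(torch.cat([content, style_z], dim=1)), content

class Inference(nn.Module):
    """Maps a (flattened) image back to (content, style) estimates."""
    def __init__(self, img_pixels=64 * 64 * 3, content_dim=64, style_dim=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(img_pixels, 512), nn.ReLU())
        self.to_content = nn.Linear(512, content_dim)
        self.to_style = nn.Linear(512, style_dim)

    def forward(self, img_flat):
        h = self.encode(img_flat)
        return self.to_content(h), self.to_style(h)

# Usage: the text fixes the content code; a fresh style code is sampled per image.
gen, inf = ContentStyleGenerator(), Inference()
text_emb = torch.randn(4, 256)           # stand-in for a pretrained text encoder
style_z = torch.randn(4, 32)             # style prior, not described by the text
fake_img, content = gen(text_emb, style_z)
content_hat, style_hat = inf(fake_img)   # inferred codes, to be matched adversarially
```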
Background: This study set out to develop a computed tomography (CT)-based wavelet transforming radiomics approach for grading pulmonary lesions caused by COVID-19 and to validate it using real-world data.
Methods: This retrospective study analyzed 111 patients with 187 pulmonary lesions from 16 hospitals; all patients had confirmed COVID-19 and underwent non-contrast chest CT. Data were divided into a training cohort (72 patients with 127 lesions from nine hospitals) and an independent test cohort (39 patients with 60 lesions from seven hospitals) according to the hospital in which the CT was performed. In all, 73 texture features were extracted from manually delineated lesion volumes, and 23 three-dimensional (3D) wavelets with eight decomposition modes were implemented to compare and validate the value of wavelet transformation for grade assessment. Finally, the optimal machine learning pipeline, valuable radiomic features, and final radiomic models were determined. The area under the receiver operating characteristic (ROC) curve (AUC), calibration curve, and decision curve were used to determine the diagnostic performance and clinical utility of the models.
Results: Of the 187 lesions, 108 (57.75%) were diagnosed as mild and 79 (42.25%) as moderate/severe. All selected radiomic features showed significant correlations with the grade of COVID-19 pulmonary lesions (P<0.05). Biorthogonal 1.1 (bior1.1) LLL was determined to be the optimal wavelet transform mode. The wavelet transforming radiomic model had an AUC of 0.910 in the test cohort, outperforming the original radiomic model (AUC = 0.880; P<0.05). Decision curve analysis showed the radiomic model could add a net benefit at any given probability threshold.
Conclusions: Wavelet transformation can enhance CT texture features. Wavelet transforming radiomics based on CT images can be used to effectively assess the grade of pulmonary lesions caused by COVID-19.
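The core idea of wavelet transforming radiomics can be sketched in a few lines, assuming PyWavelets and scikit-learn: apply a one-level 3D bior1.1 decomposition to a lesion volume, keep the low-pass LLL sub-band, extract texture features from it, and train a classifier evaluated by AUC. This is only an illustrative toy, not the study's pipeline; the feature set, classifier, and data below are placeholders (the study used 73 texture features, 23 wavelets with eight decomposition modes, and a tuned machine learning pipeline).

```python
# Illustrative sketch only (not the study's pipeline): bior1.1 LLL sub-band
# features -> simple classifier -> AUC on a held-out split. All data are toy.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def lll_subband(volume, wavelet="bior1.1"):
    """One-level 3D DWT; the 'aaa' key is the approximation (LLL) sub-band."""
    coeffs = pywt.dwtn(volume, wavelet)
    return coeffs["aaa"]

def first_order_features(volume):
    """Small placeholder feature set; the study extracted 73 texture features."""
    v = volume.ravel()
    return np.array([v.mean(), v.std(), np.percentile(v, 10), np.percentile(v, 90)])

# Toy volumes standing in for manually delineated lesion volumes (HU values).
rng = np.random.default_rng(0)
volumes = [rng.normal(-600, 150, size=(32, 32, 32)) for _ in range(40)]
labels = np.tile([0, 1], 20)   # 0 = mild, 1 = moderate/severe (toy labels)

X = np.stack([first_order_features(lll_subband(v)) for v in volumes])
clf = LogisticRegression(max_iter=1000).fit(X[:30], labels[:30])
auc = roc_auc_score(labels[30:], clf.predict_proba(X[30:])[:, 1])
print(f"toy test AUC: {auc:.3f}")
```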