Background: Germ cell tumors (GCTs) are neoplasms derived from reproductive cells, most often occurring in children and adolescents aged 10 to 19 years. Intracranial GCTs are classified histologically into germinomas and non-germinomatous germ cell tumors. Germinomas of the basal ganglia are difficult to distinguish from gliomas on the basis of symptoms or routine MRI, even for experienced neurosurgeons and radiologists. Moreover, intracranial germinoma has a lower incidence than glioma in both children and adults. We therefore established a model based on a pre-trained ResNet18 with transfer learning to better identify germinomas of the basal ganglia.
Methods: This retrospective study enrolled 73 patients diagnosed with germinoma or glioma of the basal ganglia. Brain lesions were manually segmented on both T1C and T2 FLAIR sequences, and the T1C sequence was used to build the tumor classification model. A 2D convolutional architecture with transfer learning was implemented: a ResNet18 pre-trained on ImageNet was retrained on the MRI images of our cohort. Class activation mapping was applied to visualize the model.
Results: The model was trained with five-fold cross-validation and achieved a mean AUC of 0.88. Analysis of the class activation maps showed that the model's attention focused on the peritumoral edema region of gliomas and on the tumor bulk of germinomas, indicating that differences in these regions may help discriminate the two tumors.
Conclusions: This study showed that the T1C-based transfer learning model can accurately distinguish germinomas from gliomas of the basal ganglia preoperatively.
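As an illustration of the transfer-learning setup described in this abstract, the following is a minimal sketch, not the authors' code, of fine-tuning an ImageNet-pretrained ResNet18 as a two-class MRI slice classifier in PyTorch. The input size, learning rate, and replication of the grayscale T1C slice to three channels are assumptions made for the example.

```python
# Minimal sketch (assumptions, not the authors' code): fine-tuning an
# ImageNet-pretrained ResNet18 for two-class MRI slice classification.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int = 2) -> nn.Module:
    # Load ResNet18 with ImageNet weights, then replace the final fully
    # connected layer with a two-class head (germinoma vs. glioma).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One training step on a batch of T1C slices; grayscale slices are assumed
    # to be replicated to 3 channels so the ImageNet stem can be reused.
    model.train()
    optimizer.zero_grad()
    logits = model(images)           # images: (N, 3, 224, 224)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the entire pretrained backbone is updated with a small learning rate; freezing early layers and training only the new head is an equally common variant of transfer learning.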
Background: Intracranial solitary fibrous tumor/hemangiopericytoma (SFT/HPC) is a rare neoplasm that can show malignant behavior such as infiltration, peritumoral edema, bleeding, or bone destruction. However, SFT/HPC has radiological characteristics similar to those of meningioma, although the two tumors require different clinical management and have different outcomes. This study aims to discriminate SFT/HPC from meningioma with deep learning approaches based on routine preoperative MRI.
Methods: We enrolled 236 patients with a histopathological diagnosis of SFT/HPC (n = 144) or meningioma (n = 122) treated at Xiangya Hospital from 2010 to 2020. Radiological features were extracted manually and used in a radiological diagnostic model for classification. In addition, a pre-trained ResNet-50 was fine-tuned on T1-contrast images to predict the tumor class. The attention of the deep learning model was visualized with class activation maps.
Results: SFT/HPC showed more frequent invasion of the venous sinus (p = 0.001), more cystic components (p < 0.001), and more heterogeneous enhancement patterns (p < 0.001). The deep learning model achieved a classification accuracy of 0.889 and an area under the receiver operating characteristic curve (AUC) of 0.91 in the validation set. Feature maps showed distinct clustering of SFT/HPC and meningioma in the training and test cohorts, and the model's attention mainly focused on the tumor bulk, suggesting that the solid texture of the two tumors drove the discrimination.
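The class-activation-map visualization mentioned above can be reproduced schematically as follows. This is a minimal sketch under stated assumptions (a torchvision ResNet-50 with a two-class head, 224x224 inputs, and the original CAM formulation that weights the last convolutional feature maps by the final-layer weights), not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the authors' pipeline): a class activation
# map (CAM) from a fine-tuned ResNet-50, highlighting the regions that drive
# the SFT/HPC vs. meningioma prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two tumor classes
model.eval()

def class_activation_map(image: torch.Tensor, target_class: int) -> torch.Tensor:
    # image: (1, 3, 224, 224) preprocessed T1-contrast slice.
    feats = {}
    def hook(_, __, output):         # capture the last conv feature maps
        feats["maps"] = output       # (1, 2048, 7, 7)
    handle = model.layer4.register_forward_hook(hook)
    with torch.no_grad():
        model(image)
    handle.remove()
    # CAM = feature maps weighted by the FC weights of the target class.
    weights = model.fc.weight[target_class]                  # (2048,)
    cam = torch.einsum("c,chw->hw", weights, feats["maps"][0])
    cam = F.relu(cam)
    cam = F.interpolate(cam[None, None], size=image.shape[-2:], mode="bilinear")[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The normalized map can be overlaid on the input slice as a heatmap to inspect whether the attention sits on the tumor bulk, as reported in the abstract.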
To address the poor reconstruction quality of ghost imaging via sparsity constraints (GISC) multispectral imaging when correlation operations and compressed sensing algorithms are used under low-sampling-rate detection conditions, we propose an end-to-end deep-learning-based method. Building on the U-Net, Res2Net-SE-Conv blocks replace the plain convolutional blocks to extract local and global image features at a finer granularity while adaptively adjusting the channel feature responses. A two-dimensional contextual transformer is constructed to fully exploit contextual correlation information and enhance the feature representations; it is placed in the decoder of the resulting network, dubbed CoT-Unet, to reconstruct the desired 3D data cube. Compared with U-Net and with TSA-Net, which is based on spatial-spectral self-attention, CoT-Unet improves the PSNR of the reconstructed images by 5 dB and 3 dB, improves the SSIM by 0.23 and 0.07, and reduces the SAM by 0.06 and 0.58, respectively. Compared with conventional algorithms such as DGI and CS, our method significantly improves the quality of the reconstructed images. Furthermore, comparisons at 10%, 20%, and 30% sampling rates show that our approach yields the best reconstruction quality for GISC multispectral images at low sampling rates.
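To make the channel-attention idea behind the Res2Net-SE-Conv block concrete, the following is a minimal sketch of a standard squeeze-and-excitation (SE) layer in PyTorch. The Res2Net multi-scale split and the contextual transformer in the decoder are not shown, and the reduction ratio of 16 is an assumption.

```python
# Minimal sketch (one assumed building block, not the full CoT-Unet): a
# squeeze-and-excitation layer that adaptively re-weights channel responses.
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling; excitation: bottleneck MLP + sigmoid gate.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        scale = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * scale  # channel-wise re-weighting of the feature maps

# Example: re-weight a 64-channel feature map from a U-Net encoder stage.
feat = torch.randn(1, 64, 32, 32)
out = SqueezeExcitation(64)(feat)   # same shape, channels rescaled by gates in [0, 1]
```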