Background: Urolithiasis is a global disease with high incidence and recurrence rates, and stone composition is closely related to the choice of treatment and preventive measures. Calcium oxalate monohydrate (COM) is the most common stone type in clinical practice; it is hard and difficult to fragment. Preoperative identification of stone composition and selection of an effective surgical method can reduce the risk of a second operation. Methods available for stone composition analysis include infrared spectroscopy, X-ray diffraction, and polarized light microscopy, but all are performed on stone specimens in vitro after surgery. This study aimed to design and develop an artificial intelligence (AI) model based on unenhanced computed tomography (CT) images of the urinary tract, and to investigate the model's ability to predict COM stones in vivo preoperatively, so as to provide surgeons with more accurate diagnostic information.
Methods: Preoperative unenhanced CT images of patients with urinary calculi whose components were determined by infrared spectroscopy at a single center were retrospectively analyzed, comprising 337 cases of COM stones and 170 cases of non-COM stones. All images were manually segmented, image features were extracted, and the data were randomly divided into training and testing sets in a 7:3 ratio. The least absolute shrinkage and selection operator (LASSO) algorithm was used to construct the AI model, and classification was carried out on the training and testing sets.
Results: A total of 1,218 radiomics features were extracted, and 8 features with non-zero coefficients were finally retained. The sensitivity, specificity, and accuracy of the AI model were 90.5%, 84.3%, and 88.5% for the training set, and 90.1%, 84.3%, and 88.3% for the testing set. The area under the curve was 0.935 for the training set and 0.933 for the testing set.
Conclusions: The AI model based on unenhanced CT images of the urinary tract can distinguish COM from non-COM stones in vivo preoperatively, with high sensitivity, specificity, and accuracy.
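The LASSO step described above selects a sparse subset of radiomic features by driving most regression coefficients to exactly zero. The study's actual pipeline is not shown; below is a minimal, self-contained sketch of LASSO via proximal gradient descent (ISTA) on synthetic data, with all names, sizes, and parameters illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.05, n_iter=2000):
    """Minimize (1/2n)||Xw - y||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, lam * step)
    return w

# Synthetic "radiomics" matrix: only the first 5 of 50 features matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
true_w = np.zeros(50)
true_w[:5] = [2.0, -3.0, 1.5, 2.5, -2.0]
y = X @ true_w + 0.01 * rng.standard_normal(200)

w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)  # indices of retained features
```

The L1 penalty is what yields the "features with non-zero coefficients" reported in the Results: coefficients of uninformative features are thresholded to exactly zero.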
Purpose: Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is very important for treatment planning and prognosis prediction. The main purpose of this study was to design a novel, effective algorithm to further improve the performance of glioma subtype classification using multimodal MRI images.
Methods: MRI images of four modalities (T1, T2, T1ce, and fluid-attenuated inversion recovery (FLAIR)) for 221 glioma patients were collected from the Computational Precision Medicine: Radiology-Pathology 2020 challenge, to classify astrocytoma, oligodendroglioma, and glioblastoma. We proposed a multimodal MRI image decision fusion-based network to improve glioma classification accuracy. First, the MRI images of each modality were input into a pre-trained tumor segmentation model to delineate the tumor lesion regions. Then, the whole tumor regions were centrally cropped from the original MRI images, followed by max-min normalization. Subsequently, a deep learning network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks. After that, two fully connected layers were used to map the features into the three glioma subtypes. During the training stage, the images of each modality after tumor segmentation were used to train the network to obtain its best accuracy on the testing set. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the pre-trained models obtained in the training stage.
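The max-min normalization applied after cropping can be sketched as a linear rescaling of each patch to [0, 1]; this is a minimal illustration, not the study's code, and the function name is ours:

```python
import numpy as np

def maxmin_normalize(volume):
    """Rescale intensities linearly to [0, 1]; constant inputs map to 0."""
    v = np.asarray(volume, dtype=float)
    lo, hi = v.min(), v.max()
    if hi == lo:
        return np.zeros_like(v)  # avoid division by zero on flat patches
    return (v - lo) / (hi - lo)

patch = np.array([[100.0, 300.0], [500.0, 700.0]])
norm = maxmin_normalize(patch)  # values now span exactly [0, 1]
```

Normalizing each cropped tumor patch this way removes scanner-dependent intensity offsets before the patches are fed to the DenseNet.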
Finally, the performance of the method was evaluated in terms of accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and Cohen's Kappa.
Results: The proposed method achieved an accuracy of 0.878, an AUC of 0.902, a sensitivity of 0.772, a specificity of 0.930, a PPV of 0.862, an NPV of 0.949, and a Cohen's Kappa of 0.773, significantly higher than existing state-of-the-art methods.
Conclusion: Compared with current studies, this study demonstrated the effectiveness and superiority of the proposed multimodal MRI image decision fusion-based network for glioma subtype classification, which would be of great potential value in clinical practice.
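The linear weighted decision fusion over the four modality-specific networks can be sketched as a convex combination of per-modality class probabilities; the weights and numbers below are illustrative, not the paper's:

```python
import numpy as np

def fuse_decisions(prob_list, weights):
    """Linearly weight per-modality class probabilities, then take argmax."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so fused rows remain valid probabilities
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused, fused.argmax(axis=1)

# Toy probabilities for 2 cases x 3 subtypes from 4 modality models.
t1    = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
t2    = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
t1ce  = np.array([[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]])
flair = np.array([[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]])

fused, labels = fuse_decisions([t1, t2, t1ce, flair], weights=[1, 1, 2, 1])
```

Fusing at the decision level (probabilities) rather than the feature level lets each modality network be trained and tuned independently before the weighted combination is applied at inference.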
Glioma grading before surgery is critical for prognosis prediction and treatment planning. We present a novel wavelet scattering-based radiomic method for noninvasive and accurate prediction of glioma grade. The method consists of wavelet scattering feature extraction, dimensionality reduction, and glioma grade prediction. Dimensionality reduction was achieved using partial least squares (PLS) regression, and grade prediction using support vector machine (SVM), logistic regression (LR), and random forest (RF) classifiers. Evaluated on multimodal magnetic resonance images of 285 patients with well-labeled intratumoral and peritumoral regions, the area under the receiver operating characteristic curve (AUC) of glioma grade prediction increased up to 0.99 when both intratumoral and peritumoral features from multimodal images were considered, an increase of about 13% over traditional radiomics. In addition, features extracted from peritumoral regions further increased the accuracy of glioma grading.
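The PLS dimensionality-reduction step can be illustrated with a compact NIPALS implementation for a single binary response. The data below are synthetic, and a simple least-squares rule on the latent scores stands in for the paper's SVM/LR/RF classifiers; all names are ours:

```python
import numpy as np

def pls1_scores(X, y, n_comp=2):
    """NIPALS PLS1: latent score matrix T maximizing covariance with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    T = np.zeros((X.shape[0], n_comp))
    for k in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                 # weight vector
        t = Xc @ w                             # latent scores
        p = Xc.T @ t / (t @ t)                 # loadings
        Xc = Xc - np.outer(t, p)               # deflate X
        yc = yc - ((yc @ t) / (t @ t)) * t     # deflate y
        T[:, k] = t
    return T

rng = np.random.default_rng(1)
n, p = 120, 40
y = np.repeat([0.0, 1.0], n // 2)
X = rng.standard_normal((n, p))
X[:, :5] += 2.0 * y[:, None]  # first 5 synthetic features carry the grade

T = pls1_scores(X, y, n_comp=2)
beta, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
acc = ((T @ beta + y.mean() > 0.5).astype(float) == y).mean()
```

Unlike PCA, PLS constructs components that are predictive of the response rather than merely high-variance, which is why it suits supervised radiomic pipelines with many more features than patients.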
Diffusion magnetic resonance imaging (dMRI) is an indispensable technique in today's neurological research, but its signal acquisition time is extremely long because signals must be acquired along multiple diffusion gradient directions. Supervised deep learning methods often require large amounts of complete data for training, whereas such dMRI data are difficult to obtain. We propose a deep learning model for fast reconstruction of high angular resolution diffusion imaging in data-unpaired scenarios. First, two convolutional neural networks were designed to recover k-space and q-space signals, and training with unpaired data was achieved by reducing the uncertainty of the prediction results across different reconstruction orders. Then, the model was enabled to handle noisy data by using the graph framelet transform. To evaluate the performance of our model, we conducted detailed comparative experiments on the public Human Connectome Project dataset and compared it with various state-of-the-art methods. To demonstrate the effectiveness of each module, we also conducted ablation experiments. The final results showed that our model has high efficiency and superior reconstruction performance.
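One reading of the order-based unpaired training idea is that the two recovery networks, f (k-space) and g (q-space), should agree regardless of the order in which they are applied, so a loss penalizes the disagreement between g∘f and f∘g. A toy numpy sketch, with the networks replaced by linear maps purely for illustration (this is our interpretation, not the paper's code):

```python
import numpy as np

def order_consistency_loss(f, g, x):
    """Mean squared disagreement between the two reconstruction orders."""
    return float(np.mean((g(f(x)) - f(g(x))) ** 2))

rng = np.random.default_rng(2)
x = rng.standard_normal(16)

# Commuting "reconstructions" (elementwise scalings): orders agree, loss ~ 0.
d1, d2 = rng.standard_normal(16), rng.standard_normal(16)
zero_loss = order_consistency_loss(lambda v: d1 * v, lambda v: d2 * v, x)

# Generic non-commuting maps disagree, so the loss is positive.
A, B = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))
pos_loss = order_consistency_loss(lambda v: A @ v, lambda v: B @ v, x)
```

Because this consistency term needs only the input x and the two networks, it can be minimized without paired ground-truth reconstructions.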
To promote the generalization ability of breast tumor segmentation models, and to improve segmentation performance for breast tumors of small size, low contrast, and irregular shape, we propose a progressive dual priori network (PDPNet) to segment breast tumors from dynamic contrast-enhanced magnetic resonance images (DCE-MRI) acquired at different sites. PDPNet first crops tumor regions with a coarse-segmentation-based localization module, then progressively refines the breast tumor mask using weak semantic priors and cross-scale correlation prior knowledge. To validate the effectiveness of PDPNet, we compared it with several state-of-the-art methods on multi-center datasets. The results showed that, compared with the second-best method, the DSC, SEN, KAPPA, and HD95 of PDPNet improved by 3.63%, 8.19%, 5.52%, and 3.66%, respectively. In addition, through ablations, we demonstrated that the proposed localization module decreases the influence of normal tissues and therefore improves the generalization ability of the model. The weak semantic priors allow the network to focus on tumor regions so as to avoid missing small and low-contrast tumors, and the cross-scale correlation priors promote shape awareness for irregular tumors. Integrating them in a unified framework improved multi-center breast tumor segmentation performance.
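The coarse-segmentation-based localization step can be sketched as thresholding a coarse probability map and cropping a padded bounding box around the detected region; this is a simplified 2D stand-in for PDPNet's module, with names, threshold, and margin all illustrative:

```python
import numpy as np

def crop_by_coarse_mask(image, coarse_prob, thr=0.5, margin=2):
    """Crop a padded bounding box around the thresholded coarse mask."""
    mask = coarse_prob > thr
    if not mask.any():
        return image  # no candidate tumor: fall back to the full image
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

image = np.arange(100.0).reshape(10, 10)
prob = np.zeros((10, 10))
prob[4:6, 4:6] = 0.9  # coarse model flags a small central region
patch = crop_by_coarse_mask(image, prob)
```

Cropping to the localized region discards most normal tissue before refinement, which is the mechanism the ablations credit for the improved cross-site generalization.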