Background Various fusion strategies (feature-level, matrix-level, and image-level fusion) have been used to fuse PET and MR images, and these may yield different feature values and classification performance. The purpose of this study was to measure the classification capability of features extracted using various PET/MR fusion methods in a dataset of soft-tissue sarcoma (STS). Methods The retrospective dataset included 51 patients with histologically proven STS. All patients had pre-treatment PET and MR images. Image-level fusion was conducted using the discrete wavelet transformation (DWT). During the DWT process, the MR weight was set to 0.1, 0.2, 0.3, …, 0.9, and the corresponding PET weight was set to 1 − (MR weight). The fused PET/MR images were generated using the inverse DWT. Matrix-level fusion was conducted by fusing the feature calculation matrices during feature extraction. Feature-level fusion was conducted by concatenating and by averaging the features. We measured the predictive performance of the features using univariate and multivariable analyses. The univariate analysis comprised the Mann-Whitney U test and receiver operating characteristic (ROC) analysis. The multivariable analysis developed signatures by combining the maximum relevance minimum redundancy method with multivariable logistic regression. The area under the ROC curve (AUC) was calculated to evaluate classification performance. Results In the univariate analysis, the features extracted using the image-level fusion method showed the best classification performance. In the multivariable analysis, the signatures developed from the image-level fusion-based features performed best. For T1/PET image-level fusion, the signature developed with an MR weight of 0.1 showed the best performance (AUC 0.9524; 95% confidence interval (CI), 0.8413–0.9999).
For T2/PET image-level fusion, the signature developed with an MR weight of 0.3 showed the best performance (AUC 0.9048; 95% CI, 0.7356–0.9999). Conclusions For the fusion of PET/MR images in patients with STS, the signatures developed from image-level fusion-based features showed better classification performance than those developed from feature-level and matrix-level fusion-based features, as well as from single-modality features. The image-level fusion method is therefore recommended for fusing PET/MR images in future radiomics studies.
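The image-level fusion step described above (a weighted combination of MR and PET wavelet coefficients followed by an inverse DWT) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single-level Haar transform and the uniform weight applied to all sub-bands are assumptions, since the abstract does not specify the wavelet family, decomposition depth, or per-sub-band fusion rule.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform (averaging convention);
    # assumes even image dimensions.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2.
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse_dwt(mr, pet, w_mr):
    # Weighted average of corresponding sub-bands, then inverse DWT,
    # mirroring the MR weight / (1 - MR weight) scheme in the abstract.
    w_pet = 1.0 - w_mr
    bands = [w_mr * m + w_pet * p
             for m, p in zip(haar_dwt2(mr), haar_dwt2(pet))]
    return haar_idwt2(*bands)
```

Note that because both the transform and this fusion rule are linear, a uniform weight across all sub-bands is mathematically equivalent to a weighted pixel-wise average; practical DWT fusion schemes often apply different rules to the approximation and detail sub-bands.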
Objective: To demonstrate comparable image quality with deep learning image reconstruction (DLIR) in reduced contrast medium (CM) and reduced radiation dose (double-low-dose) head computed tomography (CT) angiography (CTA), in comparison with a standard-dose protocol reconstructed with adaptive statistical iterative reconstruction-Veo (ASIR-V). Methods: A prospective study was performed in 63 patients who underwent head CTA using a 256-slice CT scanner. Patients were randomized into either a standard-dose group (n = 38) with 40 ml of Iopromide (370 mgI ml−1 at 4.5 ml s−1) or a double-low-dose group (n = 25) with 25 ml of CM at 3.0 ml s−1. For image reconstruction, the double-low-dose group used DLIR at medium (DLIR-M) and high (DLIR-H) strength, and the standard-dose group used ASIR-V at 50% strength. The CT value and standard deviation (SD), SNR, and CNR of the posterior fossa, neck muscles, and the carotid, vertebral, and middle cerebral arteries were measured. Image noise, vessel edge and structure blurring, and overall image quality were assessed using a 5-grade scale. Results: The double-low-dose group reduced the CM dose by 37.5% and the volume CT dose index (CTDIvol) by 41% compared with the standard-dose group. DLIR further reduced the SD values of the middle cerebral artery and posterior fossa and provided better overall subjective image quality (p < 0.05). Conclusions: DLIR significantly reduces image noise and provides higher overall image quality in double-low-dose head CTA. Advances in knowledge: It is feasible to reduce the CM dose by 37.5% and CTDIvol by 41% by combining 80 kVp tube voltage with DLIR in head CTA. Compared with ASIR-V, DLIR further reduces image noise and achieves better image quality at reduced contrast and radiation dose.
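The SNR and CNR measurements described above are ROI-based statistics on the reconstructed images. A minimal sketch is given below; the abstract does not state the exact formulas used, so the definitions here (SNR as mean attenuation over its SD, CNR as the vessel-to-background attenuation difference over background noise) are common conventions assumed for illustration.

```python
import numpy as np

def snr(roi):
    # Signal-to-noise ratio for a region of interest:
    # mean attenuation (HU) divided by its standard deviation (image noise).
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(vessel_roi, background_roi):
    # Contrast-to-noise ratio: HU difference between a vessel ROI and a
    # background ROI (e.g. adjacent muscle), normalized by background noise.
    # One common definition; the study's exact formula is not given.
    vessel = np.asarray(vessel_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    return (vessel.mean() - bg.mean()) / bg.std()
```

In this framing, DLIR's reduction of the SD (noise) term directly raises both SNR and CNR for a fixed contrast difference, which is how lower-noise reconstruction can compensate for reduced contrast medium and tube output.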