Whole-body attenuation correction (AC) is still challenging in combined PET/MR scanners. We describe Dixon-VIBE Deep Learning (DIVIDE), a deep learning network architecture that synthesizes pelvis pseudo-CT maps based only on the standard Dixon volumetric interpolated breath-hold examination (Dixon-VIBE) images currently acquired for AC in commercial Siemens scanners. We propose a network that maps the four 2D Dixon MR images (water, fat, in-phase and out-of-phase) to their corresponding 2D CT image. In contrast to previous methods, we used transposed convolutions to learn the up-sampling parameters, used whole 2D slices to provide context information, and pretrained the network with brain images. Twenty-eight datasets obtained from 19 patients who underwent PET/CT and PET/MR examinations were used to evaluate the proposed method. We assessed the accuracy of the µ-maps and reconstructed PET images by performing voxel- and region-based analyses comparing the standardized uptake values (SUVs, in g/mL) obtained after AC with the Dixon-VIBE (PETDixon), DIVIDE (PETDIVIDE) and CT-based (PETCT) methods. Additionally, the quantification bias was estimated in synthetic lesions defined in the prostate, rectum, pelvis and spine. Absolute mean relative change (RC) values relative to CT-based AC were lower than 2% on average for the DIVIDE method in every region of interest (ROI) except bone tissue, where the RC was lower than 4% and 6.75 times smaller than that of the Dixon method. There was an excellent voxel-by-voxel correlation between PETCT and PETDIVIDE (R2 = 0.9998, p < 0.01). The Bland-Altman plot between PETCT and PETDIVIDE showed that both the average difference and the variability were lower (mean PETCT-PETDIVIDE SUV = 0.0003, σPETCT-PETDIVIDE = 0.0094, CI0.95 = [-0.0180, 0.0188]) than for PETCT versus PETDixon (mean PETCT-PETDixon SUV = 0.0006, σPETCT-PETDixon = 0.0264, CI0.95 = [-0.0510, 0.0524]).
Statistically significant changes in PET quantification were observed between the two methods in the synthetic lesions, with the largest improvement in the femur and spine lesions. The DIVIDE method can accurately synthesize a pelvis pseudo-CT from standard Dixon-VIBE images, allowing for accurate AC in combined PET/MR scanners. Additionally, our implementation provides rapid pseudo-CT synthesis, making it suitable for routine applications and even for retrospective processing of Dixon-VIBE data.
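The abstract above evaluates the synthesized µ-maps through voxel-wise relative change (RC) against the CT reference and Bland-Altman agreement statistics. The sketch below is not from the paper; it is a minimal numpy illustration of both metrics, with hypothetical function names (`relative_change`, `bland_altman`) chosen here for clarity.

```python
import numpy as np

def relative_change(pet_test, pet_ref, mask=None):
    """Voxel-wise relative change (%) of a test PET volume w.r.t. a reference."""
    pet_test = np.asarray(pet_test, dtype=float)
    pet_ref = np.asarray(pet_ref, dtype=float)
    if mask is None:
        mask = pet_ref > 0  # avoid dividing by zero outside the body
    rc = np.zeros_like(pet_ref)
    rc[mask] = 100.0 * (pet_test[mask] - pet_ref[mask]) / pet_ref[mask]
    return rc

def bland_altman(a, b):
    """Mean difference, SD of differences, and 95% limits of agreement."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, sd_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)
```

Averaging the absolute RC within a region-of-interest mask yields the per-ROI figures quoted in the abstract.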
Objective: This study aimed to prove the concept of a new optical video-based system to measure Parkinson's disease (PD) remotely using a standard, accessible webcam. Methods: We consecutively enrolled a cohort of 42 patients with PD and healthy subjects (HSs). The participants were recorded performing MDS-UPDRS III bradykinesia upper-limb tasks with a computer webcam. The video frames were processed with artificial intelligence algorithms that track hand movements. The features extracted from video were correlated with clinical ratings on the Movement Disorder Society revision of the Unified Parkinson's Disease Rating Scale and with inertial measurement units (IMUs). The developed classifiers were validated on an independent dataset. Results: We found significant differences in motor performance between the patients with PD and the HSs in all bradykinesia upper-limb motor tasks. The best-performing classifiers were unilateral finger tapping and hand-movement speed. The model correlated both with the IMUs for quantitative assessment of motor function and with the clinical scales, demonstrating concurrent validity with existing methods. Conclusions: We present the proof of concept of a novel webcam-based technology to remotely detect parkinsonian features using artificial intelligence. This method preliminarily achieved very high diagnostic accuracy and could easily be expanded to other disease manifestations to support PD management.
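The abstract does not specify how features are computed from the tracked hand keypoints. As an illustration only, one common bradykinesia feature is the dominant finger-tapping frequency, which can be estimated from a per-frame thumb-to-index-fingertip distance signal via the FFT peak; the function name `tapping_frequency` is hypothetical.

```python
import numpy as np

def tapping_frequency(distances, fps):
    """Estimate the dominant finger-tapping frequency (Hz) from a per-frame
    thumb-to-index-fingertip distance signal, using the FFT magnitude peak."""
    x = np.asarray(distances, dtype=float)
    x = x - x.mean()                           # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
```

Features of this kind can then be compared against IMU-derived measurements and MDS-UPDRS item scores, as the study does.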
Typically, pseudo-computed tomography (CT) synthesis schemes proposed in the literature rely on complete atlases acquired with the same field of view (FOV) as the input volume. However, clinical CTs are usually acquired with a reduced FOV to decrease the patient's radiation dose. In this work, we present the Franken-CT approach, showing how a non-parametric atlas composed of diverse, anatomically overlapping magnetic resonance (MR)-CT scans, combined with deep learning methods based on the U-net architecture, enables the synthesis of extended head-and-neck pseudo-CTs. Visual inspection of the results shows the high quality of the pseudo-CT and the robustness of the method, which captures the details of the bone contours despite synthesizing the resulting image from knowledge obtained from images acquired with a completely different FOV. The experimental zero-normalized cross-correlation (ZNCC) was 0.9367 ± 0.0138 (mean ± SD), with a 95% confidence interval of (0.9221, 0.9512); the mean absolute error (MAE) was 73.9149 ± 9.2101 HU, 95% CI (66.3383, 81.4915); the structural similarity index measure (SSIM) was 0.9943 ± 0.0009, 95% CI (0.9935, 0.9951); and the Dice coefficient for bone tissue was 0.7051 ± 0.1126, 95% CI (0.6125, 0.7977). The voxel-by-voxel correlation plot shows an excellent correlation between pseudo-CT and ground-truth CT Hounsfield units (m = 0.87; adjusted R2 = 0.91; p < 0.001). The Bland-Altman plot shows that the average of the differences is low (−38.6471 ± 199.6100; 95% CI (−429.8827, 352.5884)). This work serves as a proof of concept demonstrating the great potential of deep learning methods for pseudo-CT synthesis on real clinical datasets.
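ZNCC, the first metric reported above, is the mean product of the two images after each has been zero-centered and unit-normalized, so it is invariant to affine intensity changes. A minimal numpy sketch (the function name `zncc` is my own, not from the paper):

```python
import numpy as np

def zncc(img_a, img_b):
    """Zero-Normalized Cross-Correlation between two images of equal shape."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    a = (a - a.mean()) / a.std()   # zero-mean, unit-variance normalization
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))   # 1 = perfect match, -1 = inverted match
```

Because of the normalization, a pseudo-CT that differs from the ground truth only by a global gain and offset still scores a ZNCC of 1.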
Attenuation correction (AC) remains a challenge in pelvis PET/MR imaging. In addition to segmentation- and model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvis attenuation maps (μ-maps). However, these methods often misclassify air pockets in the digestive tract, which can introduce bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and to generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images. Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3D CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semi-automated segmentations. A separate CNN was trained to synthesize pseudo-CT μ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model-, and CT-based μ-maps using data from 30 of the subjects. Finally, the impact of the different μ-maps and air pocket segmentation methods on PET quantification was investigated. Results: Air pockets segmented using the CNN agreed well with semi-automated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the two segmentations was 0.85 ± 0.14. The mean absolute relative changes (RCs) with respect to the CT-based μ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning- and model-based μ-maps, respectively. The average RC between PET images reconstructed with the deep learning- and CT-based μ-maps was 2.6%. Conclusion: We presented a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images with accuracy comparable to semi-automated segmentations. We also showed that μ-maps synthesized with a deep learning-based method from CAIPIRINHA-accelerated Dixon images are more accurate than those generated with the model-based approach available on the integrated PET/MR scanner.
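The two segmentation-agreement scores quoted above, the Dice similarity coefficient and the volumetric similarity, have simple closed forms on binary masks. A minimal numpy sketch, not taken from the paper (function names are illustrative):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def volumetric_similarity(seg_a, seg_b):
    """Volumetric similarity: 1 - |V_A - V_B| / (V_A + V_B), overlap-agnostic."""
    va = int(np.asarray(seg_a, dtype=bool).sum())
    vb = int(np.asarray(seg_b, dtype=bool).sum())
    denom = va + vb
    return 1.0 if denom == 0 else 1.0 - abs(va - vb) / denom
```

Note that volumetric similarity compares only the segmented volumes, so two equal-sized but non-overlapping masks still score 1; Dice penalizes exactly that case.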
This paper provides an overview of the deep convolutional neural network (DCNN) architectures that have been investigated in past years for the generation of synthetic computed tomography (CT), or pseudo-CT, from magnetic resonance (MR) images. The U-net, Atrous-net and Residual-net architectures were analyzed, implemented and compared. Each network was implemented with 2D filters taking 2D slices as input and with 3D filters taking 3D patches as input. Two datasets were used for training and evaluation. The first is composed of paired 3D T1-weighted MR and low-dose CT images of the heads of 19 healthy women. The second contains dual-echo Dixon-VIBE MR images and CT images of the pelvis from 13 colorectal and 6 prostate cancer patients. Bone structures in the target anatomy were key to choosing the right deep learning approach. This work provides an in-depth explanation of the architectures in order to determine which DCNN best fits each medical application. According to this study, the 3D U-net architecture is the best option for generating head pseudo-CTs, while the 2D Residual-net provides the most accurate results for the pelvic anatomy.
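The 2D and 3D variants compared above differ in how the input volume is presented to the network: full 2D slices versus overlapping 3D patches. The sketch below is a minimal numpy illustration of these two input-extraction styles, with illustrative function names and patch parameters not taken from the paper:

```python
import numpy as np

def slices_2d(volume, axis=0):
    """Split a 3D volume into 2D slices along one axis (2D-network input)."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

def patches_3d(volume, size, stride):
    """Extract (possibly overlapping) cubic 3D patches (3D-network input)."""
    z, y, x = volume.shape
    patches = []
    for i in range(0, z - size + 1, stride):
        for j in range(0, y - size + 1, stride):
            for k in range(0, x - size + 1, stride):
                patches.append(volume[i:i + size, j:j + size, k:k + size])
    return patches
```

Slices preserve full in-plane context but discard through-plane continuity; patches capture local 3D structure (useful for bone contours) at the cost of a smaller receptive field per sample, which reflects the head-versus-pelvis trade-off the study reports.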