Reducing radiation dose is an important goal in PET imaging. However, reducing the injected dose increases image noise and lowers the signal-to-noise ratio (SNR), which in turn degrades diagnostic and quantitative accuracy. Deep learning methods have shown great potential to reduce noise and improve SNR in low-dose PET data. In this work, we comprehensively investigated the quantitative accuracy of small lung nodules, in addition to visual image quality, for deep learning based denoising in oncological PET imaging. We applied and optimized an advanced deep learning method based on the U-net architecture to predict the standard-dose PET image from 10% low-dose PET data. We also investigated the effect of different network architectures, image dimensions, labels, and inputs on both noise reduction performance and quantitative accuracy. Normalized mean square error (NMSE), SNR, and standardized uptake value (SUV) bias of different nodule regions of interest (ROIs) were used for evaluation. Our results showed that U-net and GAN are superior to the convolutional autoencoder (CAE), with smaller SUVmean and SUVmax bias at the expense of inferior SNR. A fully 3D U-net achieved the best quantitative performance compared with 2D and 2.5D U-nets, with less than 15% SUVmean bias for all ten patients. U-net outperformed residual U-net (r-U-net) in general, with smaller NMSE, higher SNR, and lower SUVmax bias. The fully 3D U-net was also superior to several existing denoising methods, including Gaussian filtering, anatomically guided non-local mean (NLM) filtering, and MAP reconstruction with quadratic and relative difference priors, in terms of image quality and the trade-off between noise and bias. Furthermore, incorporating aligned CT images in a multi-channel U-net has the potential to further improve quantitative accuracy. We found that the optimal architectures and parameters of deep learning based methods differ between absolute quantitative accuracy and visual image quality. Our quantitative results demonstrate that a fully 3D U-net can both effectively reduce image noise and control bias, even for sub-centimeter small lung nodules, when generating standard-dose PET from 10% low-count down-sampled data.
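As a concrete illustration of the evaluation metrics named above, the following is a minimal sketch, under assumed normalization conventions rather than the authors' published code, of how NMSE and relative SUVmean bias over a nodule ROI could be computed:

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error between a denoised volume `pred` and the
    standard-dose reference `ref` (assumed convention: normalize the squared
    error by the energy of the reference)."""
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def suv_mean_bias_percent(pred, ref, roi_mask):
    """Relative SUVmean bias (%) over a boolean nodule ROI mask."""
    bias = pred[roi_mask].mean() - ref[roi_mask].mean()
    return 100.0 * bias / ref[roi_mask].mean()
```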
Purpose Attenuation correction using CT transmission scanning increases the accuracy of single-photon emission computed tomography (SPECT) and enables quantitative analysis. Many existing SPECT-only systems do not support transmission scanning, so scans on these systems are susceptible to attenuation artifacts. Moreover, the use of CT scans increases the radiation dose to patients, and significant artifacts can occur due to misregistration between the SPECT and CT scans as a result of patient motion. The purpose of this study is to develop an approach to estimate attenuation maps directly from SPECT emission data using deep learning methods. Methods Both photopeak window and scatter window SPECT images were used as inputs to better utilize the attenuation information embedded in the emission data. CT-based attenuation maps were used as labels, and cardiac SPECT/CT images of 65 patients were included for training and testing. We implemented and evaluated deep fully convolutional neural networks using both standard training and training with an adversarial strategy. Results The synthetic attenuation maps were qualitatively and quantitatively consistent with the CT-based attenuation maps. The globally normalized mean absolute error (NMAE) between the synthetic and CT-based attenuation maps was 3.60% ± 0.85% among the 25 testing subjects. The SPECT images reconstructed with the CT-based and synthetic attenuation maps were highly consistent: the NMAE between them was 0.26% ± 0.15%, whereas the localized absolute percentage error was 1.33% ± 3.80% in the left ventricle (LV) myocardium and 1.07% ± 2.58% in the LV blood pool. Conclusion We developed a deep convolutional neural network to estimate attenuation maps for SPECT directly from the emission data. The proposed method is capable of generating highly reliable attenuation maps to facilitate attenuation correction for SPECT-only scanners for myocardial perfusion imaging.
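To make the two-window input concrete, here is a minimal PyTorch sketch, not the authors' network, of a fully convolutional model that takes photopeak- and scatter-window SPECT volumes as two input channels and regresses an attenuation map; the layer count and channel width are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttenuationMapNet(nn.Module):
    """Illustrative two-channel fully convolutional regressor:
    photopeak + scatter window volumes in, attenuation map out."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(2, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, kernel_size=1),  # voxelwise attenuation coefficients
        )

    def forward(self, photopeak, scatter):
        # Stack the two emission windows along the channel axis: (N, 2, D, H, W).
        return self.body(torch.cat([photopeak, scatter], dim=1))
```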
Respiratory motion degrades the detection and quantification capabilities of PET/CT imaging. Moreover, mismatch between a fast helical CT image and a time-averaged PET image due to respiratory motion results in additional attenuation correction artifacts and inaccurate localization. Current motion compensation approaches typically have 3 limitations: the mismatch between respiration-gated PET images and the CT attenuation correction (CTAC) map can introduce artifacts into the gated PET reconstructions that subsequently affect the accuracy of the motion estimation; sinogram-based correction approaches do not correct for intragate motion due to intracycle and intercycle breathing variations; and the mismatch between the PET motion compensation reference gate and the CT image can cause an additional CT-mismatch artifact. In this study, we established a motion correction framework to address these limitations. In the proposed framework, a combined emission-transmission reconstruction algorithm was used for phase-matched gated PET reconstructions to facilitate motion model building. An event-by-event nonrigid respiratory motion compensation method using correlations between internal organ motion and external respiratory signals corrected both intracycle and intercycle breathing variations. The PET reference gate was automatically determined by a newly proposed CT-matching algorithm. We applied the new framework to 13 human datasets with 3 different radiotracers and 323 lesions and compared its performance with CTAC and non-attenuation-correction (NAC) approaches. Validation using 4-dimensional CT was performed for one lung cancer dataset. For the 10 ¹⁸F-FDG studies, the proposed method outperformed (P < 0.006) both the CTAC and the NAC methods in terms of region-of-interest-based SUVmean, SUVmax, and SUV ratio improvements over no motion correction (SUVmean: 19.9% vs. 14.0% vs. 13.2%; SUVmax: 15.5% vs. 10.8% vs. 10.6%; SUV ratio: 24.1% vs. 17.6% vs. 16.2%, for the proposed, CTAC, and NAC methods, respectively). The proposed method increased SUV ratios over no motion correction for 94.4% of lesions, compared with 84.8% and 86.4% for the CTAC and NAC methods, respectively. For the 2 ¹⁸F-fluoropropyl-(+)-dihydrotetrabenazine studies, the proposed method reduced the CT-mismatch artifacts in the lower lung, where the CTAC approach failed, and maintained the quantification accuracy of bone marrow, where the NAC approach failed. For the ¹⁸F-FMISO study, the proposed method outperformed both the CTAC and the NAC methods in terms of motion estimation accuracy at 2 lung lesion locations. The proposed PET/CT respiratory event-by-event motion correction framework, with motion information derived from matched attenuation-corrected PET data, provides image quality superior to that of the CTAC and NAC methods for multiple tracers.
PET has the potential to perform absolute in vivo radiotracer quantitation. This potential can be compromised by voluntary body motion (BM), which degrades image resolution, alters apparent tracer uptake, introduces CT-based attenuation correction mismatch artifacts, and causes inaccurate parameter estimates in dynamic studies. Existing body motion correction (BMC) methods include frame-based image registration (FIR) approaches and real-time motion tracking using external measurement devices. FIR does not correct for motion occurring within a predefined frame, and device-based methods are generally impractical in routine clinical use, since they require attaching a tracking device to the patient and additional device setup time. In this paper, we propose a data-driven algorithm, centroid of distribution (COD), to detect BM. In this algorithm, the central coordinate of the time-of-flight (TOF) bin, which can serve as a reasonable surrogate for the annihilation point, is calculated for every event and averaged over a fixed time interval to generate a COD trace. We hypothesized that abrupt changes in the COD trace in the lateral direction represent BMs. After detection, BM is estimated using nonrigid image registration and corrected through list-mode reconstruction. The COD-based BMC approach was validated using a monkey study and evaluated against FIR using four human studies and one dog study with multiple tracers. The proposed approach successfully detected BMs and yielded correction results superior to those of conventional FIR approaches.
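The COD trace itself is simple to state in code. The sketch below, with illustrative array layouts and a 1-second averaging window as assumptions, averages the TOF-derived event coordinates per time window; abrupt lateral (x/y) jumps between consecutive windows would then be flagged as candidate body motions:

```python
import numpy as np

def cod_trace(event_times_s, event_xyz_mm, window_s=1.0):
    """Centroid-of-distribution trace: mean TOF-bin center coordinate of all
    events falling in each consecutive time window.

    event_times_s: (N,) event arrival times in seconds.
    event_xyz_mm:  (N, 3) TOF-bin center coordinates (surrogate annihilation
                   points) in millimeters.
    """
    bins = ((event_times_s - event_times_s.min()) // window_s).astype(int)
    trace = np.full((bins.max() + 1, 3), np.nan)
    for w in np.unique(bins):
        trace[w] = event_xyz_mm[bins == w].mean(axis=0)
    return trace  # large frame-to-frame x/y changes suggest body motion
```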
Introduction Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition that affects pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. One challenge in this procedure is the difficulty of quickly identifying placental blood vessels, owing to the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame in such a way that the location of placental blood vessels is discernible at a glance. Methods In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. The video frames were segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a fully convolutional neural network of 25 layers. Results The neural network produced segmentations highly similar to the ground truth segmentations of the expert human rater (sensitivity = 92.15% ± 10.69%) and significantly more accurate than those of the novice human rater (sensitivity = 56.87% ± 21.64%; p < 0.01). Conclusion A convolutional neural network can be trained to segment placental blood vessels with accuracy approaching that of an expert human rater and exceeding that of a novice rater. Recombining these segmentations with the original fetoscopic video frames can produce enhanced frames in which blood vessels are easily detectable. This has significant implications for aiding fetoscopic surgeons, especially trainees who are not yet at an expert level.
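For reference, the sensitivity reported above follows the standard true-positive-rate definition; a minimal sketch for boolean segmentation masks:

```python
import numpy as np

def sensitivity(pred_mask, gt_mask):
    """True-positive rate TP / (TP + FN) of a predicted vessel mask against
    a ground-truth mask; both inputs are boolean arrays of the same shape."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fn = np.logical_and(np.logical_not(pred_mask), gt_mask).sum()
    return tp / (tp + fn)
```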