Deep learning has attracted growing interest for application to medical imaging, such as positron emission tomography (PET), owing to its excellent performance. Convolutional neural networks (CNNs), a facet of deep learning, require large training-image datasets. This presents a challenge in clinical settings because it is difficult to prepare large, high-quality patient-related datasets. Recently, the deep image prior (DIP) approach was devised, based on the observation that CNN structures have an intrinsic ability to solve inverse problems such as denoising without pre-training, and therefore do not require training datasets. Herein, we propose dynamic PET image denoising using a DIP approach in which the PET data themselves are used to reduce statistical image noise. Static PET data are used as the network input, the dynamic PET images serve as training labels, and the denoised dynamic PET images are obtained from the network output. We applied the proposed DIP method to computer simulations and to real data acquired from a living monkey brain with 18F-fluoro-2-deoxy-D-glucose (18F-FDG). In the simulations, our DIP method produced less noisy and more accurate dynamic images than the other algorithms. With the real data, the DIP method outperformed other post-denoising methods in terms of contrast-to-noise ratio, and it maintained the contrast-to-noise ratio when the list data were resampled to 1/5 and 1/10 of the original size, demonstrating that the method could be applied to low-dose PET imaging. These results indicate that the proposed DIP method provides a promising means of post-denoising for dynamic PET images.

INDEX TERMS Convolutional neural networks, deep image prior, deep learning, denoising, dynamic positron emission tomography.
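The training setup described above — static PET image as the fixed network input, each noisy dynamic frame as the label — can be sketched as a DIP optimization loop. This is a minimal illustration, not the paper's implementation: the tiny convolutional network, layer sizes, iteration count, and learning rate below are all placeholder assumptions (the actual work uses a much deeper encoder-decoder network and 3D volumes).

```python
# Hedged sketch of DIP-style dynamic PET denoising on 2D slices.
# A small conv net stands in for the deeper CNN used in practice.
import torch
import torch.nn as nn

def make_denoiser(channels=16):
    # Toy stand-in for an encoder-decoder CNN (illustrative only).
    return nn.Sequential(
        nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 1, 3, padding=1),
    )

def dip_denoise(static_img, noisy_frame, n_iter=200, lr=1e-3):
    """static_img: (1,1,H,W) fixed network input; noisy_frame: (1,1,H,W) label."""
    net = make_denoiser()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        out = net(static_img)
        # The noisy dynamic frame is the training label.
        loss = nn.functional.mse_loss(out, noisy_frame)
        loss.backward()
        opt.step()
    # Stopping after a limited number of iterations is the implicit
    # regularizer in DIP: the CNN fits image structure before noise.
    return net(static_img).detach()

# Usage: denoise one dynamic frame using the subject's static image as input.
static = torch.rand(1, 1, 32, 32)
frame = static + 0.1 * torch.randn(1, 1, 32, 32)
denoised = dip_denoise(static, frame, n_iter=50)
```

In a full dynamic study, the same loop would be repeated (or batched) over every time frame, each frame serving as its own label.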
A high-resolution positron emission tomography (PET) scanner, dedicated to brain studies, was developed and its performance was evaluated. A four-layer depth-of-interaction detector was designed, containing five detector units lined up axially per layer board. Each detector unit consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal-processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655,360. The transaxial and axial fields of view (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed into coincidence event data by on-the-fly software in the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used the 1.2 mm crystal segment size, while the high-speed mode (HS-mode) used a 4.8 mm size by grouping 16 of the 1.2 mm crystal segments to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of detector module pairs facing each other at various angles were measured by scanning a 0.25 mm diameter 22Na point source. The intrinsic resolutions were 1.08 mm full width at half maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first-layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA).
The system sensitivity was 21.4 cps/kBq, measured using an 18F line source aligned with the center of the transaxial FOV. Count-rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), yielding a peak true count rate of 249 kcps and a peak noise equivalent count rate (NECR_2R) of 27.9 kcps at 11.9 kBq/ml. Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for the single and coincidence count rates, respectively. In phantom studies, we also demonstrated the scanner's imaging capabilities using a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom; the images obtained were of acceptable quality for high-resolution imaging. In clinical and preclinical studies, we imaged the brains of a human and of small animals.
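For reference, the NECR_2R figure quoted above follows the standard NEMA-style definition, in which T, S, and R denote the true, scattered, and random coincidence rates, and the factor of 2 reflects randoms estimated with a delayed coincidence window:

```latex
\mathrm{NECR}_{2R} = \frac{T^{2}}{T + S + 2R}
```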
Although convolutional neural networks (CNNs) demonstrate superior performance in denoising positron emission tomography (PET) images, supervised training of a CNN requires pairs of large, high-quality PET image datasets. As an unsupervised alternative, the deep image prior (DIP) has recently been proposed; it can perform denoising using only the target image. In this study, we propose an innovative DIP procedure with a four-dimensional (4D) branch CNN architecture, trained end to end, to denoise dynamic PET images. The proposed 4D CNN architecture enables end-to-end dynamic PET image denoising by combining a feature extractor with a reconstruction branch for each time frame of the dynamic PET image. In the proposed DIP method, it is not necessary to prepare large, high-quality patient-related PET datasets. Instead, the subject's own static PET image is used as additional information, the dynamic PET images are treated as training labels, and the denoised dynamic PET images are obtained from the CNN outputs. Both simulations with [18F]fluoro-2-deoxy-D-glucose (FDG) and preclinical data with [18F]FDG and [11C]raclopride were used to evaluate the proposed framework. The results showed that the 4D DIP framework quantitatively and qualitatively outperformed 3D DIP and other unsupervised denoising methods. The proposed 4D DIP framework thus provides a promising procedure for dynamic PET image denoising.
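The branch structure described in the abstract — one feature extractor feeding a separate reconstruction branch per time frame, trained jointly — can be sketched as follows. All layer widths and the two-layer branch design are illustrative assumptions, not the paper's exact architecture, and the sketch uses 2D slices rather than full 4D volumes.

```python
# Hedged sketch of a branch CNN for dynamic PET denoising: a shared feature
# extractor fed with the static image, plus one reconstruction branch per
# dynamic time frame. Sizes are placeholders.
import torch
import torch.nn as nn

class BranchDIP(nn.Module):
    def __init__(self, n_frames, feat=16):
        super().__init__()
        # Feature extractor applied to the static PET input.
        self.extractor = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # One lightweight reconstruction branch per dynamic frame.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 1, 3, padding=1),
            )
            for _ in range(n_frames)
        ])

    def forward(self, static_img):
        z = self.extractor(static_img)
        # Concatenate per-branch outputs into a (B, n_frames, H, W) volume.
        return torch.cat([b(z) for b in self.branches], dim=1)

# One end-to-end training step: all branches are fit jointly against the
# noisy dynamic frames, which serve as the training labels.
model = BranchDIP(n_frames=4)
static = torch.rand(1, 1, 32, 32)
dynamic = torch.rand(1, 4, 32, 32)  # noisy dynamic frames as labels
loss = nn.functional.mse_loss(model(static), dynamic)
loss.backward()
```

Because the extractor is shared across frames, its weights receive gradients from every branch, which is what makes the training end to end rather than frame by frame.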
Purpose: Measurements of macular pigment optical density (MPOD) by the autofluorescence technique underestimate actual values in eyes with cataract. We applied deep learning (DL) to correct this error.
Subjects and Methods: MPOD was measured with SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany) in 197 eyes before and after cataract surgery. The nominal MPOD values (= preoperative values) were corrected by three methods: the regression equation (RE) method, the subjective classification (SC) method (described in our previous study), and the DL method. The errors between the corrected and true values (= postoperative values) were calculated for local MPODs at 0.25°, 0.5°, 1°, and 2° eccentricities and for macular pigment optical volume (MPOV) within 9° eccentricity.
Results: The mean error for MPODs at the four eccentricities was 32% without correction, 15% with RE correction, 16% with SC correction, and 14% with DL correction. The mean error for MPOV was 21% without correction and 14%, 10%, and 10%, respectively, with the same corrections. The errors with any correction were significantly lower than those without correction (P < 0.001, linear mixed model with Tukey's test). The errors with DL correction were significantly lower than those with RE correction for MPOD at 1° eccentricity and for MPOV (P < 0.001), and were equivalent to those with SC correction.
Conclusions: The objective DL method was useful for correcting MPOD values measured in older people.
Translational Relevance: MPOD can be obtained with small errors in eyes with cataract using DL.