Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction: ¹⁸F-fluorinated-N-3-fluoropropyl-2β-carboxymethoxy-3β-(4-iodophenyl)nortropane (¹⁸F-FP-CIT) PET scans, which show highly specific binding to the striatum of the brain. Three different CNN architectures (convolutional autoencoder [CAE], U-Net, and a hybrid of CAE and U-Net) were designed and trained to learn a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution and μ-map (μ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For training the CNNs, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to μ-CT of the CNN-generated μ-maps (μ-CAE, μ-U-Net, and μ-Hybrid) and of μ-MLAA was compared using Dice similarity coefficients. In addition, we compared the activity concentrations in specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions, as well as the striatal binding ratios, in the PET activity images reconstructed using those μ-maps. The CNNs generated less noisy and more uniform μ-maps than the original μ-MLAA. Moreover, air cavities and bones were better resolved in the proposed CNN outputs. The proposed deep learning approach was also useful for mitigating the crosstalk problem in MLAA reconstruction. The hybrid network of CAE and U-Net yielded the μ-maps most similar to μ-CT (Dice similarity coefficient in the whole head: 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding-ratio quantification. The proposed deep learning approach is promising for accurate attenuation correction of activity distributions in time-of-flight PET systems.
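As a concrete illustration of the setup described above, here is a minimal PyTorch sketch of a two-channel U-Net-style network that maps MLAA activity and μ-MLAA slices to a CT-derived μ-map. This is a sketch under stated assumptions, not the authors' implementation; the depth, channel counts, batch normalization, and L1 loss are all illustrative choices.

```python
# Minimal sketch (assumed architecture, not the paper's exact one): a CNN that
# takes the MLAA activity image and mu-MLAA as a 2-channel input slice and
# predicts the CT-derived mu-map (mu-CT) as a 1-channel output.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MuMapUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(2, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):                 # x: (N, 2, H, W) = [activity, mu-MLAA]
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)               # predicted mu-map slice

model = MuMapUNet()
x = torch.randn(4, 2, 128, 128)            # hypothetical batch of transverse slices
loss = nn.functional.l1_loss(model(x), torch.randn(4, 1, 128, 128))  # vs. mu-CT labels
```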
We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use the activity and attenuation maps estimated by the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) that learns a CT-derived attenuation map. Methods: The whole-body ¹⁸F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-Net was trained to predict a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution (λ-MLAA) and μ-map (μ-MLAA). We used 1.3 million patches derived from 60 patients' data to train the CNN; the data of 20 other patients were used as a validation set to prevent overfitting, and the data of the remaining 20 were used as a test set for the performance analysis. The attenuation maps generated using the proposed method (μ-CNN), μ-MLAA, and the 4-segment method (μ-segment) were compared with μ-CT as the ground truth. We also compared the voxelwise correlations between the activity images reconstructed using ordered-subset expectation maximization with the different μ-maps, and the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generated less noisy attenuation maps and achieved better bone identification than MLAA. The average Dice similarity coefficient for bone regions between μ-CNN and μ-CT was 0.77, significantly higher than that between μ-MLAA and μ-CT (0.36). The CNN result also showed the best pixel-by-pixel correlation with the CT-based result and markedly reduced differences in the activity maps relative to CT-based attenuation correction. Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.
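Since the bone-region Dice similarity coefficient is the key figure of merit here, a short NumPy sketch of that comparison may be helpful. The 0.1134 cm⁻¹ bone threshold for 511-keV μ-maps is an illustrative assumption, not a value taken from the paper.

```python
# Illustrative Dice similarity coefficient between bone masks segmented from
# two mu-maps (e.g., mu-CNN vs. mu-CT). The threshold value is an assumption.
import numpy as np

def bone_mask(mu_map: np.ndarray, threshold: float = 0.1134) -> np.ndarray:
    # Voxels denser than soft tissue (~0.096 cm^-1 at 511 keV) treated as bone.
    return mu_map > threshold

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-8)

# Example: dice(bone_mask(mu_cnn), bone_mask(mu_ct)) -> ~0.77 reported for the CNN
```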
The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as the input and their middle slices as the labels. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform 5-fold cross-validation to confirm the performance consistency. Because all input and output images are used in two-dimensional slice format, the total number of slices for training the CNN is 7,670. We assess the performance of the proposed method with respect to resolution and contrast, as well as noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is the deblurring of boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and a lower normalized root-mean-square error than the input (thicker slices). The CNN output noise level is lower than that of the ground truth and equivalent to that of the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and denoising.
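The training-pair construction described above (axially averaged thin slices as input, the middle thin slice as label) can be sketched as follows; the averaging group size of 5 and the PSNR helper are assumptions for illustration.

```python
# Sketch of simulated thick-slice / thin-slice training pairs from a thin-slice
# CT volume, plus a PSNR helper for evaluation. Group size is an assumption.
import numpy as np

def make_pairs(volume: np.ndarray, group: int = 5):
    """volume: (n_slices, H, W) thin-slice CT; yields (thick_input, thin_label)."""
    half = group // 2
    for i in range(half, volume.shape[0] - half):
        thick = volume[i - half:i + half + 1].mean(axis=0)  # axial average = thick slice
        yield thick, volume[i]                              # label = middle thin slice

def psnr(pred: np.ndarray, ref: np.ndarray) -> float:
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)            # peak taken from reference
```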
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without a coregistered anatomical magnetic resonance imaging (MRI) scan of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired ¹¹C-PIB PET and T1-weighted 3D MRI scans of subjects with Alzheimer's disease or mild cognitive impairment and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 augmented images generated by rotating 527 randomly selected datasets and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between slices (in 0.02 s). Because the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in the routine analysis of amyloid PET images in clinical practice and research.
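To make the supervised setup concrete, here is a minimal 3D convolutional auto-encoder sketch in the spirit of the CAE branch: the input is a native-space PET volume and the training label is the same scan spatially normalized with the MRI-derived parameters. Layer sizes and depth are assumptions, not the authors' configuration.

```python
# Minimal 3D CAE sketch (assumed layer sizes): native-space PET volume in,
# individually adaptive template (MRI-based spatially normalized PET) out.
import torch
import torch.nn as nn

class TemplateCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                    # x: (N, 1, D, H, W) native-space PET
        return self.decoder(self.encoder(x))  # adaptive 3D template estimate

model = TemplateCAE()
template = model(torch.randn(1, 1, 64, 64, 64))  # hypothetical volume size
```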
The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables various kinds of neuroscience research. However, because of the long scan time of 3D MR imaging, 2D images are mainly acquired in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-Net-like structure and DenseNet sub-blocks. To train the network, not only a fidelity loss but also a perceptual loss based on the VGG network was considered. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features similar to those of the original 3D data. The diagnostic ability of the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
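The combined objective mentioned above (a voxelwise fidelity loss plus a VGG-based perceptual loss) can be sketched as follows; using torchvision's VGG16 features up to relu2_2 and the 0.01 perceptual weight are illustrative assumptions.

```python
# Sketch of a fidelity + VGG perceptual loss for the inpainting network.
# Feature depth (relu2_2) and weighting are assumptions; ImageNet input
# normalization is omitted for brevity.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class InpaintingLoss(nn.Module):
    def __init__(self, perceptual_weight: float = 0.01):
        super().__init__()
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:9].eval()
        for p in self.features.parameters():
            p.requires_grad = False        # VGG is a fixed feature extractor
        self.w = perceptual_weight

    def forward(self, pred, target):       # (N, 1, H, W) slices
        fidelity = nn.functional.l1_loss(pred, target)
        p3, t3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)  # gray -> 3-channel
        perceptual = nn.functional.l1_loss(self.features(p3), self.features(t3))
        return fidelity + self.w * perceptual
```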