Objective
Attenuation correction of PET data commonly relies on a secondary imaging modality to produce attenuation maps. The standard approach uses CT images and therefore requires energy conversion. The present study introduces a deep learning-based method that eliminates the need for CT images and energy conversion.
Methods
This study employs a residual Pix2Pix network to generate attenuation-corrected PET images, trained and tested on 4033 2D PET images from 37 healthy adult brains. The model, implemented in TensorFlow and Keras, was evaluated against CT-based attenuation-corrected (CT-AC) images by comparing image similarity, intensity correlation, and intensity distribution: PSNR and SSIM quantified image similarity, and a 2D histogram of pixel intensities assessed intensity correlation. Differences in standardized uptake values (SUV) were used to compare the model's performance with the CT-AC method.
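The image-similarity metrics named above can be sketched in plain NumPy (a minimal illustration only; the study's evaluation was implemented in TensorFlow/Keras, and the function names here are hypothetical):

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between predicted and reference images."""
    return float(np.mean(np.abs(pred - ref)))

def mse(pred, ref):
    """Mean squared error between predicted and reference images."""
    return float(np.mean((pred - ref) ** 2))

def psnr(pred, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB, for intensities in [0, max_val]."""
    m = mse(pred, ref)
    return float("inf") if m == 0 else float(10.0 * np.log10(max_val ** 2 / m))

# Toy example: two 2x2 "images" with intensities in [0, 1]
ref = np.zeros((2, 2))
pred = np.full((2, 2), 0.5)
print(mae(pred, ref), mse(pred, ref), round(psnr(pred, ref), 3))
# → 0.5 0.25 6.021
```

In practice the same quantities are available as built-ins (e.g. `tf.image.psnr` and `tf.image.ssim_multiscale` in TensorFlow), which is the natural choice for a Keras pipeline.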
Results
The residual Pix2Pix network demonstrated strong agreement with CT-based attenuation correction, yielding MAE, MSE, PSNR, and MS-SSIM values of 3×10⁻³, 2×10⁻⁴, 38.859, and 0.99, respectively. The model showed a negligible mean SUV difference of 8×10⁻⁴ (P-value = 0.10), indicating its accuracy in PET image correction, and a strong correlation with the CT-based method (R² = 0.99). These findings indicate that the proposed approach surpasses the conventional method in precision and efficacy.
Conclusions
The proposed residual Pix2Pix framework enables accurate and feasible attenuation correction of brain ¹⁸F-FDG PET without CT. The PET images reconstructed by the framework show low error relative to the accepted PET/CT reference, indicating high quantitative similarity. However, clinical trials are required to evaluate the framework's performance in clinical practice.