Targeted radionuclide therapy (TRT) is a promising technique for cancer treatment. However, to deliver the required dose to the tumor, minimize toxicity in normal organs, and monitor therapeutic effects, it is important to assess individualized internal dosimetry based on patient-specific data. Advanced imaging techniques, especially radionuclide imaging, can be used to determine the spatial distribution of administered tracers for calculating the organ-absorbed dose. While planar scintigraphy remains the mainstream imaging method, SPECT, PET, and bremsstrahlung imaging have promising properties for improving quantification accuracy. This article reviews the basic principles of TRT and discusses the latest developments in radionuclide imaging techniques for different theranostic agents, with emphasis on their potential to improve personalized TRT dosimetry.
Positron emission tomography (PET) image reconstruction is an ill-posed inverse problem, and reconstructed images suffer from high noise due to the limited number of detected events. Prior information can be used to improve the quality of reconstructed PET images. Deep neural networks have also been applied to regularized image reconstruction. One approach uses a pretrained denoising neural network to represent the PET image and performs a constrained maximum likelihood estimation. In this work, we propose to use a generative adversarial network (GAN) to further improve the denoising network's performance. We also modify the objective function to include a data-matching term on the network input. Experimental studies using computer-based Monte Carlo simulations and real patient datasets demonstrate that the proposed method yields noticeable improvements over the kernel-based and U-net-based regularization methods in terms of the lesion contrast recovery versus background noise trade-off.
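The network-constrained maximum likelihood idea above can be sketched in a toy form: represent the image as the output of a fixed "network" applied to a latent input, and maximize the Poisson log-likelihood of the data over that input. This sketch substitutes a random linear decoder for the paper's trained denoising CNN and GAN; all sizes, matrices, and the optimizer (plain projected gradient descent) are illustrative assumptions.

```python
import numpy as np

# Toy sketch of network-constrained ML reconstruction. The image is
# represented as the output of a fixed "network" f(z) = W @ z (a stand-in
# linear decoder; the paper uses a trained denoising CNN), and the Poisson
# log-likelihood of the measured data y is maximized over the network
# input z. All sizes and matrices are illustrative.

rng = np.random.default_rng(0)
n_pix, n_bins, n_latent = 16, 32, 8
A = rng.random((n_bins, n_pix))            # system (projection) matrix
W = rng.random((n_pix, n_latent))          # stand-in for the pretrained network
x_true = rng.random(n_pix) + 0.5
y = rng.poisson(A @ x_true * 100) / 100.0  # noisy projection data

def neg_loglik(z):
    ybar = A @ (W @ z) + 1e-8              # expected projections
    return np.sum(ybar - y * np.log(ybar)) # Poisson negative log-likelihood

z0 = np.ones(n_latent)                     # initial network input
z = z0.copy()
lr = 1e-4
for _ in range(500):                       # projected gradient descent on z
    ybar = A @ (W @ z) + 1e-8
    grad = W.T @ (A.T @ (1.0 - y / ybar))
    z = np.maximum(z - lr * grad, 0.0)     # keep the latent non-negative

x_rec = W @ z                              # reconstructed image
```

Because the image is constrained to the range of the decoder, the estimate stays on a low-dimensional, non-negative manifold, which is the regularizing effect the constrained ML formulation relies on.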
Artifacts caused by patient breathing and movement during PET data acquisition degrade image quality. Respiratory gating is commonly used to sort the list-mode PET data into multiple bins over a respiratory cycle. Non-rigid registration of respiratory-gated PET images can reduce motion artifacts and preserve count statistics, but it is time consuming. In this work, we propose an unsupervised deep learning framework for non-rigid image registration for motion correction. Our network uses a differentiable spatial transformer layer to warp the moving image to the fixed image and a stacked structure for deformation field refinement. The estimated deformation fields were incorporated into an iterative image reconstruction algorithm to perform motion-compensated PET image reconstruction. We validated the proposed method using simulation and clinical data and implemented an iterative image registration approach for comparison. Motion-compensated reconstructions were compared with ungated images. Our simulation study showed that the motion-compensated methods can generate images with sharp boundaries and reveal more detail in the heart region than the ungated image. The resulting normalized root mean square error (NRMSE) was 24.3 ± 1.7% for the deep learning based motion correction, 31.1 ± 1.4% for the iterative registration based motion correction, and 41.9 ± 2.0% for the ungated reconstruction. The proposed deep learning based motion correction reduced the bias relative to the ungated image without increasing the noise level and outperformed the iterative registration based method. In the real-data study, both motion-compensated images provided higher lesion contrast and sharper liver boundaries than the ungated image and had lower noise than the reference-gate image. At any matched noise level, the lesion contrast of the proposed deep learning method was higher than that of the ungated image and the iterative registration method.
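For reference alongside the reported error figures, NRMSE can be computed as below. Normalizing by the RMS intensity of the reference image is one common convention; the paper's exact normalization is an assumption here.

```python
import numpy as np

def nrmse(recon, reference):
    """Normalized root mean square error in percent.

    Normalization by the RMS intensity of the reference image is one
    common convention; the paper's exact definition is an assumption.
    """
    err = np.sqrt(np.mean((recon - reference) ** 2))
    return 100.0 * err / np.sqrt(np.mean(reference ** 2))
```

For example, a reconstruction that is uniformly 10% above the reference gives an NRMSE of 10% under this convention.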
We conclude that, when both sequential SPECT/CT scans are available, a CT organ-based registration method can more effectively improve the 3D dose estimation. Sequential low-dose CT scans might therefore be considered for inclusion in the standard TRT protocol.
Purpose: The development of PET/CT and PET/MR scanners provides opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate the complementary features into an iterative reconstruction framework to improve PET image reconstruction. Methods: We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two network structures were investigated for integrating anatomical information from CT images. One was a multichannel CNN, which treated the PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for the PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance. Results: The reconstructed images showed that the proposed constrained ML reconstruction approach produced higher-quality images than the competing methods. Tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. Image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all competing methods at a matched lesion contrast.
Conclusions: The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
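Since MLEM is the classical baseline the comparisons above start from, here is a minimal numpy sketch of the multiplicative MLEM update for Poisson projection data. The system matrix, image size, and count level are illustrative assumptions, not the paper's acquisition model.

```python
import numpy as np

# Minimal sketch of the MLEM (maximum likelihood expectation maximization)
# baseline for Poisson projection data y ~ Poisson(A @ x). The system
# matrix, image size, and count level are illustrative assumptions.

rng = np.random.default_rng(1)
n_pix, n_bins = 16, 64
A = rng.random((n_bins, n_pix))              # system (projection) matrix
x_true = rng.random(n_pix) + 0.5             # ground-truth activity
scale = 1e4                                  # high counts for a clean demo
y = rng.poisson(A @ x_true * scale) / scale  # measured (scaled) projections

x = np.ones(n_pix)                           # uniform initial estimate
sens = A.sum(axis=0)                         # sensitivity image A^T 1
for _ in range(100):
    ybar = A @ x + 1e-12                     # forward projection
    x = x / sens * (A.T @ (y / ybar))        # multiplicative MLEM update
```

The update preserves non-negativity and monotonically increases the Poisson likelihood, which is why regularized and network-constrained methods are typically benchmarked against it.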