We assess the performance of a recurrent frame-generation algorithm for predicting late frames from initial frames in dynamic brain PET imaging. Methods: Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects were retrospectively employed with ten-fold cross-validation. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25-90 min) from the initial 13 frames (0-25 min). Quantitative analysis of the predicted dynamic PET frames was performed on the test and validation datasets using established metrics.
Results: The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. Bland-Altman plots showed the lowest tracer uptake bias (-0.04) for the putamen region and the smallest variance (95% CI: -0.38, +0.14) for the cerebellum. Region-wise Patlak graphical analysis in the caudate and putamen regions for 8 subjects from the test and validation datasets showed that the average biases for the influx rate (Ki) and the distribution volume were 4.3%, 5.1% and 4.4%, 4.2%, respectively (p-value < 0.05).
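The Patlak graphical analysis referenced above linearizes irreversible tracer uptake: plotting Ct/Cp against the normalized integral of the plasma input Cp yields a line whose slope is the influx rate Ki and whose intercept reflects the distribution volume. A minimal sketch with entirely synthetic curves (not data from this study; the input function and parameter values are invented for illustration):

```python
import numpy as np

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral of y over t, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def patlak_fit(t, cp, ct):
    """Patlak plot: y = Ct/Cp vs. x = int(Cp)dt / Cp; the slope is Ki."""
    x = cumtrapz0(cp, t) / cp
    y = ct / cp
    # fit only the late, linear portion of the plot (second half of frames)
    Ki, intercept = np.polyfit(x[len(x) // 2:], y[len(y) // 2:], 1)
    return Ki, intercept

# synthetic irreversible-uptake example with a known ground-truth Ki
t = np.linspace(0.1, 90.0, 60)        # frame mid-times in minutes
cp = np.exp(-0.05 * t) + 0.2          # toy plasma input function (assumed shape)
true_Ki, V0 = 0.01, 0.3
ct = true_Ki * cumtrapz0(cp, t) + V0 * cp   # tissue curve consistent with the model
Ki, intercept = patlak_fit(t, cp, ct)
print(round(Ki, 4), round(intercept, 2))    # recovers 0.01 0.3
```

Because the synthetic tissue curve follows the Patlak model exactly, the fit recovers the ground-truth slope; on measured data the fit is restricted to the late frames where the plot becomes linear.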
Conclusion: We have developed a novel deep learning approach for fast dynamic brain PET imaging that generates the last 65 min of time frames from the initial 25 min of frames, thus enabling a significant reduction in scanning time.
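The Bland-Altman agreement statistics reported in the Results (regional bias and 95% limits) can be computed from paired reference and predicted uptake values. A hedged sketch with synthetic pairs (the value ranges and noise level are assumptions, not study data):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement
    between two sets of paired measurements."""
    d = np.asarray(a) - np.asarray(b)
    bias = d.mean()
    sd = d.std(ddof=1)                     # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# synthetic paired regional uptake values: reference vs. predicted
rng = np.random.default_rng(0)
ref = rng.uniform(1.0, 4.0, 50)            # hypothetical reference uptake
pred = ref + rng.normal(-0.05, 0.1, 50)    # predicted values with a small bias
bias, (lo, hi) = bland_altman(pred, ref)
print(f"bias={bias:.3f}, 95% limits=({lo:.3f}, {hi:.3f})")
```

The bias quantifies systematic over- or under-estimation by the predicted frames, while the limits of agreement bound the expected spread of individual regional differences.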