Purpose
Stents have often been used as internal surrogates to monitor intrafraction tumor motion during pancreatic cancer radiotherapy. Based on the stent contours generated from planning CT images, the current intrafraction motion review (IMR) system on the Varian TrueBeam only provides a tool to verify stent motion visually but lacks quantitative information. The purpose of this study is to develop an automatic stent recognition method for quantitative intrafraction tumor motion monitoring in pancreatic cancer treatment.
Methods
A total of 535 IMR images from 14 pancreatic cancer patients were retrospectively selected for this study, with the manual contour of the stent on each image serving as the ground truth. We developed a deep learning–based approach, hereafter PAUnet, that integrates two mechanisms focusing on the features of the segmentation target. Objective attention modeling was integrated into the U-net framework to address the optimization difficulties of training a deep network with 2D IMR images and limited training data. A perceptual loss was combined with a binary cross-entropy loss and a Dice loss for supervision (a minimal sketch of this combined loss is given below). The deep neural network was trained to capture more contextual information to predict binary stent masks. A random-split test was performed, with images of ten patients (71%, 380 images) randomly selected for training and the remaining four patients (29%, 155 images) used for testing. Sevenfold cross-validation of the proposed PAUnet on the 14 patients was performed for further evaluation.
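The abstract does not fix an implementation, so the following is a minimal PyTorch sketch of the supervision described above: a weighted sum of binary cross-entropy, Dice, and perceptual terms. The VGG16 feature extractor and the weights w_bce, w_dice, and w_perc are illustrative assumptions, not values reported in this work.

```python
import torch
import torch.nn as nn
import torchvision

class CombinedLoss(nn.Module):
    """BCE + Dice + perceptual loss, as described in the Methods.
    The VGG16 backbone and the three weights are illustrative
    assumptions; the abstract does not specify them."""

    def __init__(self, w_bce=1.0, w_dice=1.0, w_perc=0.1):
        super().__init__()
        self.bce = nn.BCELoss()
        # frozen early VGG16 layers as a generic perceptual feature extractor
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.w = (w_bce, w_dice, w_perc)

    def dice_loss(self, pred, target, eps=1.0):
        # soft Dice on probability maps
        inter = (pred * target).sum()
        return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    def perceptual(self, pred, target):
        # replicate single-channel masks to 3 channels for the VGG input;
        # input normalization is omitted in this sketch
        feats = lambda x: self.vgg(x.repeat(1, 3, 1, 1))
        return nn.functional.mse_loss(feats(pred), feats(target))

    def forward(self, pred, target):
        # pred: sigmoid probabilities in [0, 1]; target: binary stent mask
        w_bce, w_dice, w_perc = self.w
        return (w_bce * self.bce(pred, target)
                + w_dice * self.dice_loss(pred, target)
                + w_perc * self.perceptual(pred, target))
```

In this formulation, the BCE and Dice terms drive pixel-wise overlap, while the perceptual term compares feature maps of the predicted and ground-truth masks, encouraging contextually consistent stent shapes.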
Results
Our stent segmentation results were compared with the manually segmented contours. For the random-split test, the trained model achieved a mean (±standard deviation) stent Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference ($\mathrm{Vol}_{\mathrm{diff}}$) of 0.96 (±0.01), 1.01 (±0.55) mm, 0.66 (±0.46) mm, and 3.07% (±2.37%), respectively. The sevenfold cross-validation of the proposed PAUnet yielded mean (±standard deviation) values of 0.96 (±0.02), 0.72 (±0.49) mm, 0.85 (±0.96) mm, and 3.47% (±3.27%) for the DSC, HD95, CMD, and $\mathrm{Vol}_{\mathrm{diff}}$, respectively.
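For reference, the sketch below shows one common way to compute DSC, HD95, CMD, and the volume difference for a pair of 2D binary masks using numpy and scipy. The HD95 convention (95th percentile of the pooled symmetric surface distances) and the definition of the volume difference as an absolute relative area difference on 2D masks are assumptions, since the abstract does not give formulas.

```python
import numpy as np
from scipy import ndimage

def dsc(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def _surface(mask):
    """Boundary pixels: the mask minus its binary erosion."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~ndimage.binary_erosion(mask))

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th percentile of the pooled symmetric surface distances, in mm."""
    ps, gs = _surface(pred), _surface(gt)
    sp = np.asarray(spacing)
    # pairwise mm distances between the two boundaries
    d = np.linalg.norm((ps[:, None, :] - gs[None, :, :]) * sp, axis=-1)
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)

def cmd(pred, gt, spacing=(1.0, 1.0)):
    """Center-of-mass distance between the two masks, in mm."""
    cp = np.asarray(ndimage.center_of_mass(pred.astype(float)))
    cg = np.asarray(ndimage.center_of_mass(gt.astype(float)))
    return np.linalg.norm((cp - cg) * np.asarray(spacing))

def vol_diff(pred, gt):
    """Absolute relative volume (here: 2D area) difference, in percent."""
    vp, vg = pred.astype(bool).sum(), gt.astype(bool).sum()
    return abs(float(vp) - float(vg)) / float(vg) * 100.0
```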
Conclusion
We developed a novel deep learning–based approach to automatically segment the stent from IMR images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy.