Purpose
To quickly and automatically propagate organ contours from pretreatment to fraction images in magnetic resonance (MR)‐guided prostate external‐beam radiotherapy.
Methods
Five prostate cancer patients each underwent 20 fractions of image‐guided external‐beam radiotherapy on a 1.5 T MR‐Linac system. For each patient, a pretreatment T2‐weighted three‐dimensional (3D) MRI scan was used to delineate the clinical target volume (CTV) contours. The same scan was repeated during each fraction, with the CTV contour manually adapted if necessary. A convolutional neural network (CNN) was trained for combined image registration and contour propagation: given the two input images, the network estimated both the propagated contour and a deformation field. The training set consisted of a synthetically generated ground truth of randomly deformed images and prostate segmentations. We performed a leave‐one‐out cross‐validation on the five patients and propagated the prostate segmentations from the pretreatment scan to the fraction scans. Three variants of the CNN, supervised by optimizing segmentation overlap, by optimizing the registration, or by a combination of the two, were compared with results from the open‐source deformable registration software package Elastix.
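The combined supervision variant can be illustrated as a weighted sum of a segmentation-overlap term and a registration term. The sketch below is an assumption for illustration only: the loss functions (soft Dice overlap, mean squared intensity difference) and the weight `alpha` are hypothetical choices, not taken from the paper.

```python
import numpy as np

def dice_loss(pred_seg, gt_seg, eps=1e-6):
    # Soft Dice loss: 0 for perfect overlap, approaches 1 for no overlap.
    inter = np.sum(pred_seg * gt_seg)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_seg) + np.sum(gt_seg) + eps)

def registration_loss(warped_img, fixed_img):
    # Mean squared intensity difference between the warped moving image
    # (deformed by the predicted field) and the fixed fraction image.
    return np.mean((warped_img - fixed_img) ** 2)

def combined_loss(pred_seg, gt_seg, warped_img, fixed_img, alpha=0.5):
    # Hypothetical combined objective: weighted sum of segmentation
    # overlap and registration terms (alpha is an illustrative weight).
    return alpha * dice_loss(pred_seg, gt_seg) \
        + (1.0 - alpha) * registration_loss(warped_img, fixed_img)
```

Setting `alpha` to 1 or 0 recovers the segmentation-only and registration-only supervision variants, respectively.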
Results
The networks trained on segmentation overlap or on the combined objective achieved significantly smaller Hausdorff distances between predicted and ground‐truth contours than Elastix, at a much faster registration time of 0.5 s. The variant trained to optimize both prostate overlap and the deformation field, and the variant trained to maximize prostate overlap alone, produced the best propagation results.
Conclusions
A CNN trained to maximize prostate overlap and minimize registration errors provides a fast and accurate method for deformable contour propagation in prostate MR‐guided radiotherapy.