Purpose
Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on head and neck (H&N) CT and MR images is valuable for both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with shape constraint (SC-GAN) is developed for fully automated H&N OAR segmentation on CT and low-field MRI.
Methods and materials
A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct prediction errors and image-level inconsistencies between the prediction and ground truth. An additional shape representation loss, computed between the prediction and ground truth in a latent shape space, is combined with the segmentation and adversarial losses to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database of 32 patients, and then on 25 sets of 0.35 T MR images obtained from an MR-guided radiotherapy system. The OARs include the brainstem, optic chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (left and right), optic nerves (left and right), and submandibular glands (left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN with the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet), to quantify the respective contributions of the shape constraint and the DenseNet to the deep neural network segmentation.
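To make the loss composition explicit, below is a minimal PyTorch sketch of how the segmentation, adversarial, and shape representation terms might be combined when updating the segmentation network. The function and argument names and the weights LAMBDA_ADV and LAMBDA_SHAPE are illustrative assumptions, not the paper's reported implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative loss weights; the abstract does not specify values.
LAMBDA_ADV = 0.01    # weight of the adversarial term
LAMBDA_SHAPE = 0.1   # weight of the shape representation term


def sc_gan_segmentation_loss(pred, target, discriminator, shape_encoder):
    """Combined loss for the segmentation network (hypothetical sketch).

    pred:          predicted per-voxel probabilities, shape (N, C, D, H, W)
    target:        one-hot ground-truth masks with the same shape
    discriminator: CNN scoring a segmentation map as manual vs. predicted
    shape_encoder: frozen encoder mapping a mask to a latent shape vector
    """
    # Voxel-wise segmentation loss (cross-entropy here; Dice is also common).
    seg_loss = F.binary_cross_entropy(pred, target)

    # Adversarial loss: push the prediction toward being scored as "real"
    # (i.e., indistinguishable from a manual contour) by the discriminator.
    disc_score = discriminator(pred)
    adv_loss = F.binary_cross_entropy(disc_score, torch.ones_like(disc_score))

    # Shape representation loss: L2 distance between prediction and ground
    # truth in the latent shape space, penalizing anatomically implausible shapes.
    shape_loss = F.mse_loss(shape_encoder(pred), shape_encoder(target))

    return seg_loss + LAMBDA_ADV * adv_loss + LAMBDA_SHAPE * shape_loss
```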
Results
The proposed SC-GAN slightly but consistently improved the segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which had outperformed other published methods on the same or similar H&N CT datasets. On the low-field MR dataset, the following average Dice's indices were obtained using SC-GAN: 0.916 (brainstem), 0.589 (optic chiasm), 0.816 (mandible), 0.703 (optic nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx), and the 95% surface distance ranged from 1.48 mm (left optic nerve) to 3.92 mm (larynx). By the 95% surface distance metric, the automated segmentation was more accurate on MR than on CT for the brainstem, optic chiasm, optic nerves, and parotid glands, and less accurate for the mandible. SC-GAN outperformed SC-GAN-ResNet, which in turn was more accurate than GAN alone, on both the CT and MR datasets. The segmentation time for one patient was 14 s on a single GPU.
Conclusion
The performance of our previous shape-constrained fully convolutional neural networks for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we showed that low-field MR images acquired on an MR-guided radiotherapy system are suitable for accurate OAR segmentation.