Purpose
In current clinical practice, noisy and artifact-ridden weekly cone beam computed tomography (CBCT) images are used only for patient setup during radiotherapy. Treatment planning is performed once at the beginning of treatment using high-quality planning CT (pCT) images and manual contours of organ-at-risk (OAR) structures. If the quality of the weekly CBCT images can be improved while OAR structures are simultaneously segmented, they can provide critical information for adapting radiotherapy mid-treatment as well as for deriving biomarkers of treatment response.
Methods
Using a novel physics-based data augmentation strategy, we synthesize a large dataset of perfectly/inherently registered pCT and synthetic-CBCT pairs for a locally advanced lung cancer patient cohort; these pairs are then used in a multitask three-dimensional (3D) deep learning framework to simultaneously segment and translate real weekly CBCT images into high-quality pCT-like images (a minimal sketch of such a joint objective follows).
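To make the multitask setup concrete, the following is a minimal PyTorch sketch of a shared 3D network trained jointly on a translation (CBCT-to-pCT) loss and a segmentation loss. The tiny encoder, the loss weighting, the tensor shapes, and the class count are illustrative assumptions; they are not the paper's actual architecture or training configuration.

```python
# Minimal multitask training sketch (PyTorch). Architecture, loss weights,
# and shapes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Toy shared-encoder 3D network with two heads:
    one for CBCT->CT translation, one for OAR segmentation."""
    def __init__(self, n_classes=5):  # e.g., background + 4 OARs (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.translate_head = nn.Conv3d(32, 1, 1)        # synthetic CT
        self.segment_head = nn.Conv3d(32, n_classes, 1)  # OAR logits

    def forward(self, cbct):
        feats = self.encoder(cbct)
        return self.translate_head(feats), self.segment_head(feats)

def soft_dice_loss(logits, target_onehot, eps=1e-6):
    """Differentiable Dice loss averaged over classes."""
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)  # sum over batch and spatial axes, keep classes
    inter = (probs * target_onehot).sum(dims)
    union = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

net = MultiTaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
l1 = nn.L1Loss()

# One illustrative step on a random synthetic-CBCT/pCT/mask triplet.
cbct = torch.randn(1, 1, 32, 64, 64)   # synthetic-CBCT volume (placeholder)
pct = torch.randn(1, 1, 32, 64, 64)    # registered planning CT (placeholder)
mask = torch.zeros(1, 5, 32, 64, 64)
mask[:, 0] = 1.0                        # one-hot OAR labels (placeholder)

sct, seg_logits = net(cbct)
loss = l1(sct, pct) + soft_dice_loss(seg_logits, mask)  # joint objective
opt.zero_grad(); loss.backward(); opt.step()
```

The inherent registration of the augmented pCT/synthetic-CBCT pairs is what makes a voxel-wise translation loss like the L1 term above well defined.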
Results
We compared the synthetic CT images and OAR segmentations generated by the model against real pCT images and manual OAR segmentations, with promising results. The real week 1 (baseline) CBCT images, which had an average mean absolute error (MAE) of 162.77 HU relative to the pCT images, are translated into synthetic CT images with a drastically improved average MAE of 29.31 HU and an average structural similarity of 92% with the pCT images. The average Dice scores of the 3D OAR segmentations are: lungs 0.96, heart 0.88, spinal cord 0.83, and esophagus 0.66.
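For reference, the reported metrics can be computed as sketched below. The placeholder volumes, the scikit-image SSIM call, and its window settings are our illustrative choices; the paper's evaluation pipeline (masking, normalization, SSIM parameters) may differ.

```python
# Illustrative computation of the reported metrics (MAE in HU, SSIM, Dice).
# All arrays are random placeholders standing in for real volumes.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
sct = rng.normal(0, 100, (32, 64, 64))    # synthetic CT in HU (placeholder)
pct = sct + rng.normal(0, 10, sct.shape)  # planning CT in HU (placeholder)

# Mean absolute error in Hounsfield units.
mae_hu = np.abs(sct - pct).mean()

# Structural similarity; data_range must span the HU interval compared.
ssim = structural_similarity(sct, pct, data_range=pct.max() - pct.min())

def dice(pred, gt):
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

pred_mask = rng.random((32, 64, 64)) > 0.5  # predicted OAR mask (placeholder)
gt_mask = rng.random((32, 64, 64)) > 0.5    # manual OAR mask (placeholder)
print(f"MAE={mae_hu:.2f} HU, SSIM={ssim:.3f}, "
      f"Dice={dice(pred_mask, gt_mask):.3f}")
```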
Conclusions
We demonstrate an approach for translating artifact-ridden CBCT images into high-quality synthetic CT images while simultaneously generating good-quality segmentation masks for different OARs. This approach could allow clinicians to adjust treatment plans using only the routine low-quality CBCT images, potentially improving patient outcomes. Our code, data, and pre-trained models will be made available via our physics-based data augmentation library, Physics-ArX, at https://github.com/nadeemlab/Physics-ArX.