Deformable image registration (DIR) has the potential to improve modern radiotherapy in many respects, including volume definition, treatment planning, and image-guided adaptive radiotherapy, and studies have demonstrated its potential clinical benefits. However, DIR accuracy is difficult to measure without a known ground truth, yet it must be assessed before DIR is integrated into the radiotherapy workflow. Visual assessment is an important step towards clinical acceptance. We propose a visualization framework that supports the exploration and assessment of DIR accuracy. It offers different interaction and visualization features for exploring candidate regions, simplifying the process of visual assessment. The visualization is based on a voxel-wise comparison of local image patches, for which dissimilarity measures are computed and visualized to indicate registration quality locally. We performed an evaluation with three radiation oncologists to demonstrate the viability of our approach. In the evaluation, the participants rated lung regions with regard to their visual accuracy, and the ratings were compared to the registration error measured with expert-defined landmarks. Regions rated as "accepted" had an average registration error of 1.8 mm, with the highest single landmark error being 3.3 mm. Additionally, survey results show that the proposed visualizations support a fast and intuitive investigation of DIR accuracy and are suitable for finding even small errors.
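As a concrete illustration of the patch-based dissimilarity computation described above, the following Python sketch derives a voxel-wise dissimilarity map from a fixed volume and the deformably registered (warped) moving volume. The abstract does not specify which dissimilarity measure is used; this sketch assumes a mean squared intensity difference over cubic patches, and the function name and parameters are hypothetical.

```python
# Minimal sketch, assuming patch-wise mean squared difference as the
# dissimilarity measure; the paper's actual measures may differ.
import numpy as np
from scipy.ndimage import uniform_filter

def patch_dissimilarity_map(fixed, warped, patch_radius=2):
    """Return a voxel-wise dissimilarity map between a fixed image and
    the warped moving image.

    Averages the squared intensity difference over a cubic patch of side
    (2 * patch_radius + 1) centered at each voxel; lower values suggest
    better local alignment.
    """
    size = 2 * patch_radius + 1
    sq_diff = (fixed.astype(np.float64) - warped.astype(np.float64)) ** 2
    # uniform_filter computes the local patch mean at every voxel.
    return uniform_filter(sq_diff, size=size)

# Hypothetical usage: color-code the map and overlay it on the fixed image
# so that high-dissimilarity regions stand out as candidates for inspection.
# dmap = patch_dissimilarity_map(fixed, warped, patch_radius=3)
```

In a framework like the one proposed, such a map could drive the interactive exploration: regions with high local dissimilarity are flagged as candidates and presented to the expert for closer visual assessment.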
In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times and with different modalities need to be aligned into a single coordinate system by 3D/3D registration. State-of-the-art methods for validating registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified in both image data sets using either the speeded-up robust features (SURF) or the Harris feature detector. The quality of the registration is defined as the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR), and 25 in vivo lung cases (CT, CBCT). Both a qualitative method (visual inspection by a radiation oncologist) and a quantitative method (the mean target registration error, mTRE, based on selected markers) were employed. In the porcine skull dataset, the manual annotations and the Harris detector gave comparable results, but both overestimated the gold-standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, mTRE_man (based on manually annotated landmarks) was 2.2 mm, whereas mTRE_Harris (based on landmarks found by the Harris detector) was 4.1 mm and mTRE_SURF (based on landmarks found by the SURF detector) was 8 mm. In the lung cases, the difference between mTRE_man and mTRE_Harris was less than 1 mm, while the difference between mTRE_man and mTRE_SURF was up to 3 mm. The Harris detector performed better than the SURF detector, with a resulting estimated registration error close to the gold standard. The Harris detector was therefore shown to be the more suitable method for automatically quantifying the geometric accuracy of multi-modal rigid registration.
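To make the proposed quality measure concrete, here is a minimal Python sketch of the Harris-based variant, simplified to a single 2D slice: interest points are detected in both registered images with a Harris detector, each point is paired with its spatially nearest counterpart in the other image, and the mean Euclidean distance over plausible pairs is reported. The paper operates on 3D volumes and also uses SURF; the nearest-neighbour pairing, the distance threshold, and the function name here are assumptions for illustration only.

```python
# Simplified 2D sketch of the Harris-based registration-quality measure;
# the matching strategy and threshold are assumptions, not the paper's method.
import numpy as np
from scipy.spatial import cKDTree
from skimage.feature import corner_harris, corner_peaks

def harris_registration_error(img_a, img_b, min_distance=5, max_pair_dist=10.0):
    """Estimate residual registration error as the mean Euclidean distance
    between matched Harris interest points in two already-registered images."""
    # Detect interest points independently in both images.
    pts_a = corner_peaks(corner_harris(img_a), min_distance=min_distance)
    pts_b = corner_peaks(corner_harris(img_b), min_distance=min_distance)
    # Pair each point in image A with its nearest neighbour in image B.
    tree = cKDTree(pts_b)
    dists, _ = tree.query(pts_a)
    # Discard distant pairs: these are likely features without a true
    # counterpart in the other modality rather than registration error.
    matched = dists[dists <= max_pair_dist]
    return matched.mean() if matched.size else np.nan
```

Filtering by a maximum pairing distance is one simple way to suppress interest points that have no true correspondence across modalities; for well-registered images the remaining mean distance approximates the local geometric error the abstract reports as mTRE_Harris.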