Background: Image registration, which estimates a spatial transformation between a pair of images and establishes point-wise correspondences to achieve spatial consistency, has long been an active research area in the medical image computing community.
Purpose: Previous work has mainly focused on learning complicated deformation fields by maximizing global-level (i.e., foreground plus background) image similarity. We argue that taking the background similarity into account may not be a good solution when, as in real clinical practice, only accurate alignment of the target organs/regions is sought.
Methods: We therefore propose the concept of Salient Registration and introduce a novel deformable network equipped with a saliency module. Specifically, a multitask learning-based saliency module is proposed to discriminate the salient regions-of-registration in a semisupervised manner. Our deformable network then analyzes the intensity and anatomical similarity of the salient regions and performs the salient deformable registration.
Results: We evaluate the efficacy of the proposed network on challenging abdominal multiorgan CT scans. The experimental results demonstrate that the proposed registration network outperforms other state-of-the-art methods, achieving a mean Dice similarity coefficient (DSC) of 40.2%, 95th percentile Hausdorff distance (95 HD) of 20.8 mm, and average symmetric surface distance (ASSD) of 4.58 mm. Moreover, even when trained with only one labeled scan, our network still attains satisfactory registration performance, with a mean DSC of 39.2%, 95 HD of 21.2 mm, and ASSD of 4.78 mm.
Conclusions: The proposed network provides an accurate solution for multiorgan registration and has the potential to improve other registration applications. The code is publicly available at https://github.com/Rrrfrr/Salient-Deformable-Network.
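To make the core idea of restricting similarity to salient regions concrete, the following is a minimal sketch, not the authors' released implementation. It assumes a PyTorch setting with a hypothetical registration network and warping function, and shows how a soft saliency mask can weight a voxel-wise similarity term so that background voxels do not drive the loss.

```python
# Minimal sketch: saliency-weighted similarity loss (illustrative, assumed names).

import torch


def masked_mse_similarity(warped: torch.Tensor,
                          fixed: torch.Tensor,
                          saliency: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """Mean squared intensity difference weighted by a saliency mask.

    warped, fixed : (B, 1, D, H, W) warped moving image and fixed image
    saliency      : (B, 1, D, H, W) soft mask in [0, 1]; high values mark
                    the organs/regions whose alignment actually matters
    """
    diff = (warped - fixed) ** 2
    # Weight each voxel's error by its saliency, then normalize by the total
    # mask weight so that the (large) background contributes little.
    return (saliency * diff).sum() / (saliency.sum() + eps)


# Hypothetical usage (registration_net and warp are placeholders, not from the repo):
# flow = registration_net(moving, fixed)        # predicted deformation field
# warped = warp(moving, flow)                   # spatial-transformer warp
# loss = masked_mse_similarity(warped, fixed, saliency)
```

The design choice illustrated here is the normalization by the mask sum: without it, shrinking the salient region would trivially shrink the loss, whereas with it the loss measures average misalignment inside the regions-of-registration only.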