Purpose: Volumetric medical image registration has significant clinical value. Traditional registration methods can be time-consuming on large volumetric data because of their iterative optimization. In contrast, existing deep learning-based networks obtain the registration quickly; however, most of them require an independent rigid alignment before deformable registration, and these two steps are performed separately rather than trained end to end. Methods: We propose an end-to-end joint affine and deformable network for three-dimensional (3D) medical image registration. The proposed network combines two deformation stages: the first obtains the affine alignment, and the second is a deformable subnetwork that achieves nonrigid registration. The parameters of the two subnetworks are shared. Global and local similarity measures serve as the loss functions of the two subnetworks, respectively. Moreover, an anatomical similarity loss is devised to weakly supervise the training of the whole registration network. Once trained, the network performs deformable registration in a single forward pass. Results: The efficacy of our network was extensively evaluated on three public brain MRI datasets: Mindboggle101, LPBA40, and IXI. Experimental results demonstrate that our network consistently outperformed several state-of-the-art methods with respect to the Dice index (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD). Conclusions: The proposed network provides accurate and robust volumetric registration without any pre-alignment requirement, which enables end-to-end deformable registration.
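All three abstracts evaluate with the Dice index (DSC), and the first one also uses an anatomical similarity loss over segmentation labels. A minimal NumPy sketch of the DSC computation for binary masks follows; `dice_index` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). Both inputs are treated as boolean masks.
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks overlap perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0
```

An anatomical similarity loss can be built from the same quantity, e.g., one minus a soft (differentiable) Dice over warped label probabilities.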
Deformable image registration is of essential importance for clinical diagnosis, treatment planning, and surgical navigation. However, most existing registration solutions require a separate rigid alignment before deformable registration and may not handle large deformations well. We propose a novel edge-aware pyramidal deformable network (referred to as EPReg) for unsupervised volumetric registration. Specifically, we fully exploit the complementary information in multi-level feature pyramids to predict multi-scale displacement fields. This coarse-to-fine estimation progressively refines the predicted registration field, which enables our network to handle large deformations between volumetric data. In addition, we integrate edge information with the original images as dual inputs, which enhances the texture structures of the image content and impels the network to pay extra attention to edge information for structure alignment. The efficacy of our EPReg was extensively evaluated on three public brain MRI datasets: Mindboggle101, LPBA40, and IXI30. Experiments demonstrate that EPReg consistently outperformed several cutting-edge methods with respect to the Dice index (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD). The proposed EPReg is a general solution to the problem of deformable volumetric registration.
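The dual-input idea above pairs each image with an explicit edge map. A minimal NumPy sketch of one plausible construction is shown below, using a gradient-magnitude edge map stacked with the image along a channel axis; `edge_aware_input` and the choice of gradient operator are assumptions for illustration, not EPReg's actual preprocessing.

```python
import numpy as np

def edge_aware_input(volume):
    """Build a 2-channel dual input: the raw volume plus its
    gradient-magnitude edge map (a simple edge-enhancement choice).

    volume: 3D array (D, H, W). Returns an array of shape (2, D, H, W).
    """
    vol = np.asarray(volume, dtype=np.float64)
    grads = np.gradient(vol)          # central-difference gradient per axis
    edge = np.sqrt(sum(g * g for g in grads))  # gradient magnitude
    return np.stack([vol, edge], axis=0)
```

The two channels would then be fed jointly to the registration network so that both intensity and boundary structure inform the predicted displacement field.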
Background: Image registration has long been an active research area in the medical image computing community; its goal is to perform a spatial transformation between a pair of images and establish a point-wise correspondence that achieves spatial consistency. Purpose: Previous work has mainly focused on learning complicated deformation fields by maximizing global-level (i.e., foreground plus background) image similarity. We argue that taking the background similarity into account may not be a good solution if we only seek accurate alignment of target organs/regions in real clinical practice. Methods: We therefore propose the novel concept of Salient Registration and introduce a deformable network equipped with a saliency module. Specifically, a multitask learning-based saliency module is proposed to discriminate the salient regions-of-registration in a semisupervised manner. Our deformable network then analyzes the intensity and anatomical similarity of the salient regions and conducts the salient deformable registration. Results: We evaluate the efficacy of the proposed network on challenging abdominal multiorgan CT scans. The experimental results demonstrate that the proposed registration network outperforms other state-of-the-art methods, achieving a mean Dice similarity coefficient (DSC) of 40.2%, 95th-percentile Hausdorff distance (95 HD) of 20.8 mm, and average symmetric surface distance (ASSD) of 4.58 mm. Moreover, even when trained with a single labeled scan, our network still attains satisfactory registration performance, with a mean DSC of 39.2%, 95 HD of 21.2 mm, and ASSD of 4.78 mm. Conclusions: The proposed network provides an accurate solution for multiorgan registration and has the potential to improve other registration applications. The code is publicly available at https://github.com/Rrrfrr/Salient-Deformable-Network.
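The core idea of Salient Registration is to weight the similarity term by where registration matters, rather than scoring foreground and background equally. A minimal NumPy sketch of one such formulation follows: a saliency-weighted mean squared error, where the saliency map down-weights the background. The function `salient_mse` and the MSE choice are illustrative assumptions; the paper's actual intensity and anatomical terms may differ.

```python
import numpy as np

def salient_mse(fixed, warped, saliency, eps=1e-8):
    """Saliency-weighted intensity dissimilarity.

    fixed, warped: images of identical shape.
    saliency: per-voxel weights in [0, 1]; 0 ignores a voxel
    (background), 1 gives it full weight (salient region).
    Returns the weighted mean of squared intensity differences.
    """
    w = np.clip(np.asarray(saliency, dtype=np.float64), 0.0, 1.0)
    sq_diff = (np.asarray(fixed, dtype=np.float64)
               - np.asarray(warped, dtype=np.float64)) ** 2
    return float((w * sq_diff).sum() / (w.sum() + eps))
```

With a saliency map of all ones this reduces to ordinary MSE; as the map concentrates on target organs, background mismatch stops influencing the loss.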