We present an efficient method for the fusion of medical images captured using different modalities that enhances the original images and combines the complementary information of the various modalities. The contourlet transform has mainly been employed as a fusion technique for images obtained from the same or different modalities. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the dual-tree complex contourlet transform (DT-CCT) by incorporating directional filter banks (DFB) into the DT-CWT. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. To improve the fused image quality, we propose new fusion rules based on principal component analysis (PCA) that depend on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency components, the PCA method is adopted, and for the high-frequency components, the salient features are selected based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency components. The experimental results showed that the proposed method produces a fused image that preserves extensive features from the multimodal inputs.
Medical image fusion has been used to derive useful information from multimodality medical image data. This paper presents a dual-tree complex contourlet transform (DT-CCT) based approach for the fusion of a magnetic resonance image (MRI) and a computed tomography (CT) image. The objective of fusing an MRI and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the DT-CCT by incorporating directional filter banks (DFB) into the DT-CWT. To improve the fused image quality, we propose new fusion rules based on principal component analysis (PCA) that depend on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency coefficients, the PCA method is adopted, and for the high-frequency coefficients, the salient features are selected based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency coefficients. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. The experimental results showed that the proposed method produces a fused image that preserves extensive features from the multimodal inputs.
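The two fusion rules described above can be sketched in NumPy. This is a minimal illustration on generic 2-D subband arrays, not the paper's implementation: the DT-CCT decomposition and reconstruction are omitted, and the 3×3 energy window and the eigenvector-based weighting are assumptions made for the sketch.

```python
import numpy as np

def pca_fuse_lowpass(a, b):
    """Fuse two low-frequency subbands with PCA-derived weights.

    The principal eigenvector of the 2x2 covariance matrix of the
    flattened subbands gives the mixing weights (normalized to sum
    to 1), so the fused subband is a convex combination of the two."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    v = np.abs(eigvecs[:, np.argmax(eigvals)])   # principal component
    w = v / v.sum()
    return w[0] * a + w[1] * b

def local_energy(x, win=3):
    """Sum of squared coefficients over a win x win neighbourhood."""
    pad = win // 2
    xp = np.pad(x ** 2, pad, mode="edge")
    e = np.zeros_like(x, dtype=float)
    for i in range(win):
        for j in range(win):
            e += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return e

def energy_fuse_highpass(a, b, win=3):
    """Keep, per pixel, the coefficient with the larger local energy."""
    return np.where(local_energy(a, win) >= local_energy(b, win), a, b)
```

Because the PCA weights are non-negative and sum to one, the fused low-frequency subband stays within the elementwise range of the inputs, while the energy rule preserves the stronger (more salient) high-frequency detail from either source.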
Image registration methods based on mutual information criteria have been widely used in monomodal medical image registration and have shown promising results. Feature-based registration is an efficient technique for clinical use, because it can significantly reduce computational costs. In general, the majority of registration methods consist of the following four steps: feature extraction, feature matching, transformation of the models and, finally, resampling the image. It was noted that the accuracy of the registration process depends on feature matching and control point (CP) detection. Therefore, this paper relies on these features for monomodal magnetic resonance image (MRI) registration. We propose to extract salient edges and CPs from medical images by exploiting the efficient multiresolution representation of the nonsubsampled contourlet transform (NSCT). The MR images were first decomposed using the NSCT; edges and CPs were then extracted from the bandpass directional subbands of the NSCT coefficients using a set of proposed rules. After edge and CP extraction, mutual information (MI) was adopted for the registration of the feature points, and the translation parameters were calculated using particle swarm optimization (PSO). We implemented experiments to evaluate the performance of the NSCT and MI similarity measures for 2-D monomodal registration. The experimental results showed that the proposed method achieves highly accurate performance for MRI monomodal registration.
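The MI-driven alignment step can be sketched as follows. This is a minimal illustration, not the paper's method: MI is estimated from a 16-bin joint histogram (an assumed bin count), and an exhaustive search over integer translations stands in for the PSO optimizer, which would explore the same objective more efficiently over a continuous parameter space.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Estimate MI between two images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1)   # marginal distribution of a
    py = p.sum(axis=0)   # marginal distribution of b
    nz = p > 0           # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

def register_translation(fixed, moving, max_shift=3):
    """Find the integer (dy, dx) shift of `moving` that maximizes MI.

    Exhaustive search stands in for PSO; np.roll wraps at the borders,
    which is acceptable for this small illustrative search range."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best, best_mi
```

MI peaks when the two images are perfectly aligned, since the joint histogram then collapses onto a one-to-one mapping between intensity bins; any misalignment spreads the joint distribution and lowers the score.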