Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene, each captured with a different focus setting, into a single fully focused image. Such an all-in-focus image is more informative and more useful for visual perception. Shift-invariance and directional selectivity are essential for producing a high-quality fused image; traditional wavelet-based fusion methods lack both properties and therefore introduce ringing distortions into the fused result. In this paper, an MIF method based on the quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. The qshiftN DTCWT is then used to fuse the low- and high-frequency components into a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is applied to generate the all-in-focus image. Owing to its directionality and shift-invariance, this transform preserves high-quality information in the fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in both visual and quantitative evaluations.
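To make the pipeline shape concrete, the following minimal sketch (not the authors' implementation) builds a Laplacian pyramid for each source image with OpenCV, fuses the detail levels with an absolute-maximum rule standing in for the qshiftN DTCWT fusion, combines the coarsest level with eigenvector-based PCA weights standing in for MPCA, and collapses the pyramid into an all-in-focus result. The helper names `laplacian_pyramid`, `pca_weights`, and `fuse`, the number of levels, and the grayscale 8-bit input assumption are all illustrative choices.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass (detail) levels plus the coarsest Gaussian level."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # high-frequency (detail) band
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def pca_weights(a, b):
    """Eigenvector-based PCA weights for two flattened coefficient blocks (stand-in for MPCA)."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])   # principal component
    return v / v.sum()

def fuse(img_a, img_b, levels=4):
    """Fuse two same-sized grayscale 8-bit images into an all-in-focus sketch result."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = []
    for la, lb in zip(pa[:-1], pb[:-1]):
        # absolute-maximum rule on the detail bands (stand-in for the qshiftN DTCWT rule)
        fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    w = pca_weights(pa[-1], pb[-1])              # weights for the low-frequency residual
    fused.append(w[0] * pa[-1] + w[1] * pb[-1])
    # collapse the pyramid from coarse to fine
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```

For color inputs the same rules could be applied per channel, or the fusion weights could be derived from a luminance channel only.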
Feature extraction gathers the necessary detailed information from the given source images and holds that information for further analysis. The quality of the fused image depends on many factors, particularly directional selectivity and shift-invariance. Traditional wavelet-based transforms, however, produce ringing distortions and artifacts because of their poor directional selectivity and lack of shift-invariance. A hybrid wavelet fusion algorithm that combines the Dual-Tree Complex Wavelet Transform (DTCWT) with the Stationary Wavelet Transform (SWT) overcomes these deficiencies and preserves directional selectivity and shift-invariance. The SWT decomposes each source image into approximation and detail sub-bands, and the approximation sub-bands are then further decomposed with the DTCWT. The resulting low-frequency components are fused using Texture Energy Measures (TEM), while the high-frequency components are fused with the absolute-maximum rule; the detail sub-bands are likewise fused with the absolute-maximum rule. The texture-energy rule classifies image regions effectively and improves the accuracy of the fused output. Finally, the inverse SWT is applied to reconstruct the fused image. Experimental results show that the proposed approach outperforms previously reported methods. The proposed fusion method based on SWT, DTCWT, and TEM also addresses the inherent defects of both the Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network (PA-DCPCNN) and Multiscale Transform-Convolutional Sparse Representation (MST-CSR).
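As a concrete illustration of the two fusion rules only, the sketch below (a simplified, assumption-based example, not the authors' code) computes a Laws-style texture energy measure to select low-frequency coefficients and applies the absolute-maximum rule to high-frequency and detail coefficients. The SWT and DTCWT decompositions themselves are omitted, and the mask names `L5`, `E5`, and `S5`, the chosen filter pairs, and the 9x9 energy window are assumptions following the common Laws convention.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws 1-D masks (standard convention); 2-D masks are their outer products.
L5 = np.array([1., 4., 6., 4., 1.])      # level
E5 = np.array([-1., -2., 0., 2., 1.])    # edge
S5 = np.array([-1., 0., 2., 0., -1.])    # spot

def texture_energy(band, window=9):
    """Sum of local energies of a few Laws filter responses."""
    energy = np.zeros(band.shape, dtype=np.float64)
    for row, col in [(L5, E5), (E5, L5), (L5, S5), (S5, L5)]:
        mask = np.outer(row, col)
        response = convolve(band.astype(np.float64), mask, mode='reflect')
        energy += uniform_filter(np.abs(response), size=window)
    return energy

def fuse_lowfreq_tem(low_a, low_b):
    """TEM rule: keep the coefficient from the source with higher local texture energy."""
    ea, eb = texture_energy(low_a), texture_energy(low_b)
    return np.where(ea >= eb, low_a, low_b)

def fuse_highfreq_absmax(high_a, high_b):
    """Absolute-maximum rule for high-frequency / detail coefficients."""
    return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
```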
This paper reports image fusion for multi-focus images using the Frequency Partition Discrete Cosine Transform (FP-DCT) with a modified principal component analysis (MPCA) technique. Decomposition at fixed levels is a critical limitation of earlier image fusion techniques; the frequency partitioning approach used in this study instead selects the decomposition levels based on pixel intensity and clarity. The paper also presents the modified PCA technique, which provides dimensionality reduction. A wide range of quality evaluation metrics was computed to compare fusion performance on five images. Several techniques, namely PCA, wavelet transforms with PCA, Multiresolution Singular Value Decomposition (MSVD) with PCA, Multiresolution DCT (MRDCT) with PCA, and Frequency Partition DCT (FP-DCT) with PCA, were evaluated for comparison with the proposed FP-DCT with MPCA technique. Images produced by the proposed fusion method show better visual quality, negligible information loss, and fewer discontinuities than those of other state-of-the-art methods.
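A minimal sketch of the frequency-partitioning idea is shown below, under the assumption of a global 2-D DCT (via scipy) and a simple index-based partition. The actual FP-DCT level selection driven by pixel intensity and clarity, as well as the modified PCA step, are replaced here by a fixed cut-off and standard eigenvector-based PCA weights, so the function `fuse_fpdct` and its `cutoff` parameter are illustrative only.

```python
import numpy as np
from scipy.fft import dctn, idctn

def pca_weights(a, b):
    """Standard eigenvector-based PCA weights for two coefficient blocks."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse_fpdct(img_a, img_b, cutoff=0.25):
    """Fuse two same-sized grayscale images in the DCT domain.

    Coefficients whose (row + col) index falls below a cut-off form the
    low-frequency partition (PCA-weighted average); the remaining
    coefficients form the high-frequency partition (absolute-maximum rule).
    """
    ca = dctn(img_a.astype(np.float64), norm='ortho')
    cb = dctn(img_b.astype(np.float64), norm='ortho')

    rows, cols = np.indices(ca.shape)
    low = (rows + cols) < cutoff * sum(ca.shape)     # index-based frequency partition

    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)   # high-frequency: abs-max
    w = pca_weights(ca[low], cb[low])
    fused[low] = w[0] * ca[low] + w[1] * cb[low]          # low-frequency: PCA weights

    return np.clip(idctn(fused, norm='ortho'), 0, 255)
```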