The objective of multi-modal image fusion is to combine complementary information from multiple modalities into a single representation with improved reliability and interpretability. The two modalities considered for fusion are images from low-light visible cameras, which capture fine scene details, and images from infrared cameras, which provide high-contrast details. In this paper, low-light images with poor target contrast are enhanced using the phenomenon of stochastic resonance prior to fusion. Entropy serves as the measure for iteratively tuning the coefficients through the parameters of a bistable system. The fusion of the enhanced low-light visible and infrared images exploits the combined advantages of a multi-scale decomposition approach and principal component analysis. Experiments were carried out on different image datasets, and an analysis of the proposed method is discussed.
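As a rough illustration of the enhancement step only, the sketch below applies a discrete bistable stochastic resonance update to a normalized low-light image and retains the iterate with the highest entropy. The update form x <- x + dt*(a*x - b*x.^3 + input), the parameter values and the iteration budget are assumptions for illustration, not the settings reported in the paper (MATLAB, Image Processing Toolbox).

% Minimal sketch of the stochastic-resonance enhancement (assumed form):
% a discrete bistable update is iterated on the normalized low-light image
% and the iterate with the highest entropy is kept. Parameters a, b, dt and
% the iteration count are illustrative, not the paper's reported settings.
function out = sr_enhance(img)
    I  = im2double(img);                 % low-light visible image in [0,1]
    a  = 2;  b = 1;  dt = 0.01;          % assumed bistable system parameters
    x  = I;                              % state initialized with the input
    out = x;  bestH = entropy(x);
    for k = 1:200
        x = x + dt*(a*x - b*x.^3 + I);   % bistable SR iteration
        x = min(max(x, 0), 1);           % keep intensities in [0,1]
        H = entropy(x);                  % entropy of the current iterate
        if H > bestH                     % retain the most informative result
            bestH = H;  out = x;
        end
    end
end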
Purpose: Integrating complementary information while preserving high-quality visual perception is essential in infrared and visible image fusion. Contrast-enhanced fusion for target detection in military, navigation and surveillance applications, where visible images are captured under low-light conditions, is a challenging task. This paper focuses on enhancing poorly illuminated low-light images through decomposition prior to fusion, in order to provide high visual quality.
Design/methodology/approach: A two-step process is implemented to improve visual quality. First, the low-light visible image is decomposed into dark and bright components, with the decomposition threshold selected by maximizing Renyi's entropy; the decomposed dark and bright images are then intensified with a stochastic resonance (SR) model. Second, fusion is performed in the discrete wavelet transform (DWT) domain, using a texture-information-based weighted average for the low-frequency coefficients and a select-maximum rule for the high-frequency coefficients.
Findings: Simulations in MATLAB were carried out on various test images. Qualitative and quantitative evaluations of the proposed method show improvement in edge-based and information-based metrics compared with several existing fusion techniques.
Originality/value: The processing steps considered in this work yield a high-contrast, edge-preserved and brightness-improved fused image with good visual quality.
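A minimal sketch of the fusion stage is given below, assuming the Renyi-threshold decomposition and SR intensification have already produced the enhanced visible image (vis) and that the infrared image (ir) is co-registered. It uses a single-level 'db2' DWT, a local-standard-deviation measure as the texture information weighting the low-frequency coefficients, and an absolute-maximum selection for the high-frequency coefficients; the wavelet, window size and decomposition level are illustrative assumptions, not the paper's reported choices (MATLAB, Wavelet and Image Processing Toolboxes).

% Minimal sketch of the DWT-domain fusion stage (assumed details): single-level
% 'db2' decomposition, local standard deviation as the texture measure that
% weights the low-frequency coefficients, and an absolute-maximum rule for the
% high-frequency coefficients.
function F = dwt_fuse(vis, ir)
    V = im2double(vis);  R = im2double(ir);
    [aV, hV, vV, dV] = dwt2(V, 'db2');   % enhanced visible image subbands
    [aR, hR, vR, dR] = dwt2(R, 'db2');   % infrared image subbands
    tV = stdfilt(aV, ones(3));           % texture information of visible band
    tR = stdfilt(aR, ones(3));           % texture information of infrared band
    w  = tV ./ (tV + tR + eps);          % larger weight to richer texture
    aF = w.*aV + (1 - w).*aR;            % weighted-average low-frequency band
    hF = selmax(hV, hR);                 % select-maximum high-frequency bands
    vF = selmax(vV, vR);
    dF = selmax(dV, dR);
    F  = idwt2(aF, hF, vF, dF, 'db2');   % reconstruct the fused image
end

function c = selmax(c1, c2)
    m = abs(c1) >= abs(c2);              % keep the larger-magnitude coefficient
    c = c1.*m + c2.*(~m);
end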