Stripe noise severely degrades image quality in infrared imaging systems. Existing destriping algorithms still struggle to balance noise suppression, detail preservation, and real-time performance, which hinders their application in the spectral imaging and signal processing fields. To address this problem, this paper presents an innovative wavelet deep neural network that operates in the transform domain: it takes full account of the intrinsic characteristics of stripe noise and of the complementary information between the coefficients of different wavelet sub-bands, estimating the noise accurately at a lower computational load. In addition, a special directional regularizer is defined to separate scene details from stripe noise more thoroughly and to recover those details more accurately. Extensive experiments on simulated and real data demonstrate that the proposed method outperforms several classical destriping methods in both quantitative and qualitative assessments.

INDEX TERMS Neural networks, image denoising, infrared image sensors, wavelet transforms.
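The abstract describes the approach only at a high level; the following PyTorch sketch illustrates one plausible reading of it. The Haar analysis filters are standard, but the sub-band CNN (`SubbandEstimator`, its depth and width) and the exact form of the directional regularizer are assumptions made for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarDWT(nn.Module):
    """One-level 2D Haar transform implemented as fixed strided convolutions."""
    def __init__(self):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])    # approximation
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])  # horizontal detail
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])  # vertical detail
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])  # diagonal detail
        self.register_buffer("weight", torch.stack([ll, lh, hl, hh]).unsqueeze(1))

    def forward(self, x):                          # x: (N, 1, H, W)
        return F.conv2d(x, self.weight, stride=2)  # -> (N, 4, H/2, W/2)

class SubbandEstimator(nn.Module):
    """Small CNN that predicts stripe-noise coefficients from all four
    sub-bands jointly, so complementary sub-band information is shared."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 4, 3, padding=1),
        )

    def forward(self, subbands):
        return self.net(subbands)

def directional_regularizer(stripe_img):
    """Hypothetical directional penalty: column stripes are nearly constant
    along the vertical axis, so their vertical differences should vanish."""
    return stripe_img.diff(dim=-2).abs().mean()
```

Because these 2D Haar filters form an orthonormal set, `F.conv_transpose2d` with the same weights and stride inverts the transform, so corrected coefficients map straight back to the image domain.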
Existing fixed pattern noise reduction (FPNR) methods are easily affected by the motion state of the scene and the working condition of the image sensor, which leads to over-smoothing, ghosting artifacts, and a slow convergence rate. To address these issues, we design an innovative cascade convolutional neural network (CNN) model with residual skip connections that performs single-frame blind FPNR without any parameter tuning. Moreover, a coarse-fine convolution (CF-Conv) unit is introduced to extract complementary features at various scales and fuse them to capture more spatial information. Inspired by the success of the visual attention mechanism, we further propose a spatial-channel noise attention unit (SCNAU) to separate scene details from fixed pattern noise more thoroughly and recover the real scene more accurately. Experimental results on test data demonstrate that the proposed cascade CNN-FPNR method outperforms existing FPNR methods in both visual effect and quantitative assessment.

Infrared imaging systems are easily affected by fixed pattern noise (FPN), which is mainly caused by the spatially non-uniform response of the individual detectors in the sensor [6], [7]. More seriously, spatial FPN generally drifts with time, which makes the problem even more challenging [8]-[11]. As a result, FPN causes a significant decline in imaging quality and reduces the precision of object detection and recognition. To meet this challenge, cost-effective fixed pattern noise reduction (FPNR) techniques based on signal processing have been continually investigated and are applied in nearly all infrared imaging systems.

Existing FPNR algorithms fall into two primary categories: reference-based FPNR (RB-FPNR) and scene-based FPNR (SB-FPNR) [12]-[14]. RB-FPNR methods remove the FPN according to fixed calibration parameters calculated from the response to blackbody radiation at different temperatures [15]. Unfortunately, such calibration requires the camera to halt normal operation in order to update the calibration parameters, owing to the inherent temporal drift of detector characteristics [16]. Given this fact, most recent research has focused on developing SB-FPNR methods, such as neural networks (NN) [17], the temporal high-pass filter (THPF) [18], [19], and the constant-statistics (CS) method [20], [21]. In SB-FPNR algorithms, the calibration parameters are iteratively updated using information extracted from inter-frame motion. Consequently, ghosting artifacts and over-smoothing caused by sudden deceleration of scene motion often seriously degrade the noise reduction performance; moreover, the relatively slow convergence during scene switching is unacceptable for most practical applications.

In recent years, convolutional neural network (CNN) [22] models have been explored in depth and applied to various image processing tasks [23], such as image super-resolution [24], [25], image denoising [26], and sketch synthesis [27]-[29]. To the best of our knowledge, CNN-based FPNR m...
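The abstract names two building blocks, CF-Conv and SCNAU, without giving their internals. Below is a minimal PyTorch sketch of how such units are commonly realized; the branch kernel sizes, channel widths, and attention layout are assumptions for illustration, while the coarse/fine split, the spatial-channel attention, and the residual (noise-subtracting) skip follow the description above.

```python
import torch
import torch.nn as nn

class CFConv(nn.Module):
    """Coarse-fine convolution: parallel fine (3x3) and coarse (5x5)
    branches whose multi-scale features are fused by concatenation."""
    def __init__(self, cin, cout):
        super().__init__()
        self.fine = nn.Conv2d(cin, cout // 2, 3, padding=1)
        self.coarse = nn.Conv2d(cin, cout // 2, 5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.fine(x), self.coarse(x)], dim=1))

class SCNAU(nn.Module):
    """Spatial-channel noise attention: channel gating via global average
    pooling plus a per-pixel spatial mask, applied multiplicatively."""
    def __init__(self, c):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // 4, c, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(c, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel(x) * self.spatial(x)

class CascadeFPNR(nn.Module):
    """Cascade of CF-Conv stages with a residual skip: the network
    estimates the FPN and subtracts it from the input frame."""
    def __init__(self, width=32, stages=4):
        super().__init__()
        body = [CFConv(1, width)]
        body += [CFConv(width, width) for _ in range(stages - 1)]
        self.body = nn.Sequential(*body)
        self.attn = SCNAU(width)
        self.tail = nn.Conv2d(width, 1, 3, padding=1)

    def forward(self, x):                          # x: (N, 1, H, W) noisy frame
        noise = self.tail(self.attn(self.body(x)))
        return x - noise   # residual skip: clean = input - estimated FPN
```

The residual formulation is what makes the operation single-frame and blind: the network only has to learn the noise pattern, so no motion-dependent calibration parameters need to be tuned or iterated.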
To realize multi-focus image fusion, this paper presents an end-to-end deep convolutional neural network (DCNN) model that produces the final fused image directly from the source images. To improve fusion precision, the multi-focus fusion DCNN introduces a multi-scale feature extraction (MFE) unit that collects complementary features from different spatial scales and fuses them to exploit more spatial information. Moreover, a visual attention unit is designed to help the network locate the focused region more accurately and select the features most useful for seamlessly splicing details during fusion. Experimental results show that the proposed method is superior to several existing multi-focus image fusion methods in both subjective visual effects and objective quality metrics.

INDEX TERMS Image fusion, multi-focus, convolution neural network, multi-scale.
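As a concrete illustration of the pipeline this abstract describes, here is a hedged PyTorch sketch: an MFE unit built from parallel dilated convolutions, and a soft visual attention map that decides, per pixel, how much each source image contributes to the fused output. The dilation rates, channel counts, and the two-channel stacking of the sources are illustrative assumptions, not the paper's specified design.

```python
import torch
import torch.nn as nn

class MFE(nn.Module):
    """Multi-scale feature extraction: parallel dilated convolutions gather
    context at several spatial scales, then a 1x1 convolution fuses them."""
    def __init__(self, cin, cout):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(cin, cout, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * cout, cout, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(b(x)) for b in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

class FusionNet(nn.Module):
    """End-to-end fusion: a soft attention map weights, per pixel, how much
    each source contributes, yielding the fused image directly."""
    def __init__(self, width=32):
        super().__init__()
        self.mfe = MFE(2, width)            # both sources stacked as channels
        self.attn = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1), nn.Sigmoid(),  # focus map in [0, 1]
        )

    def forward(self, a, b):                # a, b: (N, 1, H, W) source images
        m = self.attn(self.mfe(torch.cat([a, b], dim=1)))
        return m * a + (1 - m) * b          # fused image, no post-processing
```

The soft map keeps the whole pipeline differentiable end to end, matching the abstract's claim that the fused image is produced directly from the sources without a separate decision or splicing step.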