High-resolution multi-spectral images are desired for many applications in remote sensing. However, optical remote sensing satellites can only provide multi-spectral images at low resolution. Pan-sharpening aims to generate a high-resolution multi-spectral (MS) image from a panchromatic (PAN) image and its low-resolution MS counterpart. Conventional deep-learning-based pan-sharpening methods process the PAN and low-resolution MS images in a feedforward manner, where shallow layers cannot access useful information from deep layers. To make full use of deep features with strong representation ability, we propose a two-path network with feedback connections, through which deep features are rerouted to refine the shallow features. Specifically, we leverage the structure of a recurrent neural network to pass the feedback information. In addition, a powerful feature extraction block with multiple projection pairs is designed to handle the feedback information and to produce powerful deep features. Extensive experimental results demonstrate the effectiveness of the proposed method.
We discuss the beam smoothing principle of a continuous phase plate (CPP) when the input light varies. An analysis model of the process in which a laser beam with random phase noise propagates through a CPP is established. With this model, the beam smoothing mechanism of the CPP can be described for laser beams with different phase aberrations. A method to optimize the smoothing result is also introduced.
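A toy far-field model of this process can be sketched in a few lines of NumPy, assuming (as is standard in scalar diffraction) that the focal-spot intensity is the squared magnitude of the Fourier transform of the near field. The flat-top amplitude, the noise amplitude, and the uniform pseudo-random CPP phase screen are all illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
amp = np.ones((n, n))                    # idealized flat-top beam amplitude
noise = rng.uniform(-0.3, 0.3, (n, n))   # random phase aberration (radians)
cpp = rng.uniform(-np.pi, np.pi, (n, n)) # toy stand-in for the CPP phase screen

# near field = amplitude times the combined (aberration + CPP) phase
near = amp * np.exp(1j * (noise + cpp))

# far-field (focal-plane) intensity via a 2-D FFT
far = np.fft.fftshift(np.fft.fft2(near))
intensity = np.abs(far) ** 2
print(intensity.shape)                   # (64, 64)
```

Varying the statistics of `noise` in such a model is one way to study how the smoothing result depends on the input beam's phase aberrations.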
Recently, very deep convolutional neural networks (CNNs) have achieved impressive results in image super-resolution (SR). In particular, residual learning techniques are widely used. However, a previously proposed residual block can only extract single-level semantic feature maps from a single receptive field, so residual blocks must be stacked to extract higher-level semantic feature maps, which significantly deepens the network. A very deep network, in turn, is hard to train and limits the representation of hierarchical information for reconstruction. Building on the residual block, we propose an enhanced multi-scale residual network (EMRN) that exploits hierarchical image features via densely connected enhanced multi-scale residual blocks (EMRBs). Specifically, the newly proposed residual block (EMRB) constructs multi-level semantic feature maps through a two-branch inception. The two branches of the inception in our EMRB consist of 2 and 4 convolutional layers respectively, so a single EMRB covers different ranges of receptive fields. Meanwhile, local feature fusion (LFF) is used in every EMRB to adaptively fuse the local feature maps extracted by the two-branch inception. Furthermore, global feature fusion (GFF) in EMRN obtains abundant useful features from preceding and subsequent EMRBs in a holistic manner. Experiments on benchmark datasets show that our EMRN compares favorably with state-of-the-art methods in reconstructing superior SR images.
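The two-branch structure of the EMRB can be sketched with 1-D convolutions, purely for illustration: a shallow branch (2 layers, small receptive field) and a deep branch (4 layers, large receptive field) are fused and added back residually. The fixed 50/50 fusion weights and the 3-tap kernel below are toy assumptions standing in for learned parameters.

```python
import numpy as np

def conv3(x, k):
    # 'same' 1-D convolution with a 3-tap kernel (stand-in for a 3x3 conv)
    return np.convolve(x, k, mode="same")

def emrb(x, k):
    # hypothetical EMRB sketch: two branches of different depth, hence
    # different receptive fields, fused locally and added residually
    branch_a = conv3(conv3(x, k), k)                      # 2 layers, small RF
    branch_b = conv3(conv3(conv3(conv3(x, k), k), k), k)  # 4 layers, large RF
    fused = 0.5 * branch_a + 0.5 * branch_b               # toy local fusion
    return x + fused                                      # residual connection

x = np.arange(8, dtype=float)
k = np.array([0.25, 0.5, 0.25])  # smoothing kernel as a toy filter
y = emrb(x, k)
print(y.shape)                   # (8,)
```

In the actual EMRB the fusion is adaptive (LFF) rather than a fixed average, and the branches operate on 2-D feature maps with learned filters.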
Image fusion is a practical technology applicable in many fields, such as medicine, remote sensing and surveillance. This paper introduces an image fusion method based on multi-scale decomposition and joint sparse representation. First, joint sparse representation decomposes the two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately, and final weight maps are obtained from them by joint bilateral filtering. Then, the innovation images are decomposed at multiple scales with the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image, which is combined with the common image to produce the ultimate fused image. The experimental results show that our method's average metrics are: mutual information (MI) 5.3377, feature mutual information (FMI) 0.5600, normalized weighted edge preservation value (QAB/F) 0.6978 and nonlinear correlation information entropy (NCIE) 0.8226. Our method achieves better performance than state-of-the-art methods in both visual perception and objective quantification.
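The weighted-fusion step at the heart of this pipeline can be sketched as follows. This NumPy toy skips the joint sparse representation, bilateral filtering, and rolling guidance filter: the `activity` saliency measure and the mean-image stand-in for the common component are assumptions used only to show how weight maps combine the innovation images before recombination with the common image.

```python
import numpy as np

def activity(img):
    # hypothetical saliency measure: absolute horizontal gradient
    g = np.zeros_like(img)
    g[:, 1:] = np.abs(img[:, 1:] - img[:, :-1])
    return g

def fuse(inno_a, inno_b, common):
    # weight maps from per-image activity, normalized to sum to ~1
    wa, wb = activity(inno_a), activity(inno_b)
    total = wa + wb + 1e-12
    wa, wb = wa / total, wb / total
    fused_inno = wa * inno_a + wb * inno_b  # weighted fusion of innovations
    return common + fused_inno              # recombine with the common image

rng = np.random.default_rng(1)
a, b = rng.random((4, 4)), rng.random((4, 4))
common = 0.5 * (a + b)   # toy stand-in for the JSR common component
f = fuse(a - common, b - common, common)
print(f.shape)           # (4, 4)
```

In the full method the weight maps come from joint bilateral filtering of the initial maps, and the fusion is applied per scale of the rolling-guidance decomposition rather than to the raw innovation images.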