Image haze removal is highly desired in computer vision applications. This study proposes a novel context-guided generative adversarial network (CGGAN) for single image dehazing, in which a novel encoder-decoder is employed as the generator. The generator consists of a feature-extraction net, a context-extraction net, and a fusion net in sequence. The feature-extraction net acts as an encoder and is used to extract haze features. The context-extraction net is a multi-scale parallel pyramid decoder that extracts the deep features of the encoder and generates a coarse dehazed image. The fusion net is a decoder that produces the final haze-free image. To obtain better dehazing results, the multi-scale information obtained during the decoding process of the context-extraction decoder is used to guide the fusion decoder. By introducing an extra coarse decoder into the original encoder-decoder, the CGGAN makes better use of the deep feature information extracted by the encoder. To ensure that the proposed CGGAN works effectively in different haze scenarios, different loss functions are employed for the two decoders. Experimental results show the advantage and effectiveness of the proposed CGGAN, with evident improvements over existing state-of-the-art methods.
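The abstract describes a one-encoder, two-decoder generator. Below is a minimal PyTorch sketch of that layout only: all module names, channel widths, pyramid scales, and layer counts are illustrative assumptions, not the paper's architecture; it simply shows how a context decoder can emit both a coarse image and guidance features consumed by the fusion decoder.

```python
# Minimal sketch of the CGGAN generator layout (assumed details throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Feature-extraction net: downsamples the hazy input into deep features."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class ContextDecoder(nn.Module):
    """Context-extraction net: multi-scale parallel pyramid yielding a coarse
    dehazed image plus multi-scale guidance features (scales are assumptions)."""
    def __init__(self, ch=128):
        super().__init__()
        self.scales = (1, 2, 4)
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch // 4, 1) for _ in self.scales])
        fused = ch + (ch // 4) * len(self.scales)
        self.guide = nn.Conv2d(fused, ch, 3, padding=1)   # multi-scale guidance
        self.coarse = nn.Conv2d(ch, 3, 3, padding=1)      # coarse dehazed image
    def forward(self, f):
        h, w = f.shape[-2:]
        pyr = [F.interpolate(b(F.adaptive_avg_pool2d(f, s)), size=(h, w),
                             mode='bilinear', align_corners=False)
               for b, s in zip(self.branches, self.scales)]
        g = F.relu(self.guide(torch.cat([f] + pyr, dim=1)))
        return torch.sigmoid(self.coarse(g)), g

class FusionDecoder(nn.Module):
    """Fusion net: decodes encoder features, guided by the context decoder's
    multi-scale features, into the final haze-free image."""
    def __init__(self, ch=128):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(ch * 2, ch // 2, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(ch // 2, 3, 4, stride=2, padding=1)
    def forward(self, f, guide):
        x = F.relu(self.up1(torch.cat([f, guide], dim=1)))
        return torch.sigmoid(self.up2(x))

class CGGANGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder, self.context, self.fusion = Encoder(), ContextDecoder(), FusionDecoder()
    def forward(self, hazy):
        f = self.encoder(hazy)            # deep haze features
        coarse, guide = self.context(f)   # coarse result + guidance features
        return coarse, self.fusion(f, guide)  # separate losses apply to each output

coarse, final = CGGANGenerator()(torch.rand(1, 3, 256, 256))
print(coarse.shape, final.shape)  # coarse at 1/4 resolution, final at full size
```

Returning both outputs from the generator is what lets the two decoders be trained with different loss functions, as the abstract states.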
Underwater images usually suffer from colour distortion, blur, and low contrast, which hinder the subsequent processing of underwater information. To address these problems, this paper proposes a novel approach for single underwater image enhancement that integrates data-driven deep learning with hand-crafted image enhancement techniques. First, a statistical analysis is made of the average deviation of each channel of input underwater images from that of their corresponding ground truths, and it is found that both the red channel and the green channel of an underwater image contribute to its colour distortion. Concretely, the red channel of an underwater image is usually seriously attenuated, while the green channel is usually over-strengthened. Motivated by this observation, an attention-mechanism-guided residual module for underwater image colour correction is proposed, in which the red channel and the green channel of the underwater image are compensated in different ways. Coupled with the attention mechanism, the residual module can adaptively extract and integrate the most discriminative features for colour correction. For scene contrast enhancement and scene deblurring, traditional image enhancement techniques, namely CLAHE (contrast-limited adaptive histogram equalization) and Gamma correction, are coupled with a multi-scale convolutional neural network (MSCNN), where CLAHE and Gamma correction serve as complements to deal with the complex and changeable underwater imaging environment. Experiments on synthetic and real underwater images demonstrate that the proposed method performs favourably against state-of-the-art underwater image enhancement methods.
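The hand-crafted half of the pipeline (CLAHE and Gamma correction) is standard and easy to illustrate. Below is a minimal OpenCV sketch of that branch only; the learned MSCNN and the attention-guided residual colour-correction module are not reproduced, and the parameter values (clip limit, tile size, gamma) and file names are assumptions, not the paper's settings.

```python
# Hand-crafted enhancement branch: CLAHE for contrast, Gamma for brightness.
import cv2
import numpy as np

def clahe_enhance(bgr, clip_limit=2.0, tiles=(8, 8)):
    """Apply CLAHE on the lightness channel only, leaving colour untouched."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def gamma_correct(bgr, gamma=1.5):
    """Brighten dark underwater scenes via a gamma look-up table."""
    lut = (np.linspace(0, 1, 256) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(bgr, lut)

img = cv2.imread('underwater.png')            # hypothetical input path
enhanced = gamma_correct(clahe_enhance(img))  # complements the learned branch
cv2.imwrite('enhanced.png', enhanced)
```

Running CLAHE in LAB space rather than per RGB channel is a common choice because it boosts local contrast without further skewing the already-distorted red/green balance the abstract analyses.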
This article presents a saliency-guided remote sensing image dehazing network model. It consists of three blocks: a dense-residual-based backbone network, a saliency map generator, and a haze removal module based on a deformed atmospheric scattering model (ASM). The dense-residual-based backbone network captures the texture detail information of a remote sensing image; the saliency map generator produces the saliency map of the corresponding remote sensing image, which guides the network to capture more texture details through a guided fusion module; finally, the deformed ASM removes haze from the remote sensing image. The model is compared with several state-of-the-art dehazing methods on synthetic data sets and real remote sensing images. Experimental results show that, on the synthetic data set, the PSNR of this model is 4.47 dB higher and the SSIM 0.045 higher than those of the best competing model. On real hazy remote sensing images, the visual effect of the model is also better than that of existing methods. The authors also perform experiments demonstrating that remote sensing image dehazing is helpful for automatic detection in remote sensing images.
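The abstract does not give the form of its "deformed" ASM, but the classical model it builds on is standard: I(x) = J(x)t(x) + A(1 - t(x)), inverted as J(x) = (I(x) - A)/t(x) + A. The sketch below shows only that classical inversion; in the full model the transmission map t and atmospheric light A would come from the network, and the function and variable names here are illustrative.

```python
# Classical atmospheric scattering model inversion (not the paper's deformed ASM).
import numpy as np

def recover_scene(hazy, transmission, airlight, t_min=0.1):
    """Invert the ASM: J = (I - A) / max(t, t_min) + A.

    hazy:          H x W x 3 image in [0, 1]
    transmission:  H x W transmission map t(x) in (0, 1]
    airlight:      length-3 global atmospheric light A
    t_min:         lower bound to avoid amplifying noise where t -> 0
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over RGB
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

# Toy round trip: synthesize haze with the ASM, then invert it exactly.
rng = np.random.default_rng(0)
clear = rng.random((4, 4, 3))
A = np.array([0.9, 0.9, 0.9])
t = np.full((4, 4), 0.6)
hazy = clear * t[..., None] + A * (1 - t[..., None])
assert np.allclose(recover_scene(hazy, t, A), clear, atol=1e-6)
```

The t_min floor matters in practice: where transmission approaches zero (dense haze, distant scenery), the division would otherwise amplify sensor noise and estimation error in A.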