Poor illumination significantly degrades the quality of captured images. In this paper, a novel convolutional neural network named DEANet is proposed, based on Retinex theory, for low-light image enhancement. DEANet combines the frequency and content information of images and is divided into three subnetworks: a decomposition network, an enhancement network, and an adjustment network, which perform image decomposition; denoising, contrast enhancement, and detail preservation; and image adjustment and generation, respectively. The model is trained on the public LOL dataset, and the experimental results show that it outperforms existing state-of-the-art methods in both visual effect and image quality.
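The abstract does not give DEANet's layer configuration, so the following is only a minimal PyTorch sketch of the Retinex-style decomposition stage the method builds on: an image I is split into reflectance R and illumination L such that I ≈ R ⊙ L. The layer counts, channel widths, and activations here are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a Retinex-style decomposition network (assumed structure).
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Splits an input image I into reflectance R and illumination L
    such that I ~= R * L (the Retinex assumption)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4, 3, padding=1),  # 3 reflectance + 1 illumination channels
        )

    def forward(self, image: torch.Tensor):
        out = self.body(image)
        reflectance = torch.sigmoid(out[:, :3])   # per-pixel RGB reflectance in [0, 1]
        illumination = torch.sigmoid(out[:, 3:])  # single-channel illumination map
        return reflectance, illumination

low = torch.rand(1, 3, 128, 128)  # a synthetic low-light input
R, L = DecompositionNet()(low)
reconstruction = R * L            # should approximate the input once trained
```

In the full pipeline, the enhancement network would then denoise R and brighten L, and the adjustment network would recombine them into the output image.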
In the field of single remote sensing image Super-Resolution (SR), deep Convolutional Neural Networks (CNNs) have achieved top performance. To further improve how convolutional modules process remote sensing images, we construct an efficient residual feature calibration block that generates expressive features. After extracting residual features, we first split them into two parts along the channel dimension. One part flows into the Self-Calibrated Convolution (SCC) for further refinement, and the other is rescaled by the proposed Two-Path Channel Attention (TPCA) mechanism. SCC corrects local features according to their responses under a deep receptive field, so the features are refined without additional computational cost. The proposed TPCA uses the means and variances of the feature maps to obtain accurate channel attention vectors. Moreover, a region-level non-local operation is introduced to capture long-distance spatial context by exploring pixel dependencies at the region level. Extensive experiments demonstrate that the proposed residual feature calibration network is superior to other SR methods in terms of quantitative metrics and visual quality.
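A hedged sketch of the TPCA idea as described: per-channel means and variances each drive their own attention path, and the combined vector rescales the feature maps. The reduction ratio, the two-layer MLPs, and the fusion by summation are assumptions the abstract does not specify.

```python
# Sketch of Two-Path Channel Attention (TPCA): mean and variance statistics
# are pooled per channel and fused into one attention vector (assumed fusion).
import torch
import torch.nn as nn

class TPCA(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        def mlp():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
        self.mean_path, self.var_path = mlp(), mlp()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=(2, 3))                # per-channel mean, shape (B, C)
        var = x.var(dim=(2, 3), unbiased=False)  # per-channel variance, shape (B, C)
        attn = torch.sigmoid(self.mean_path(mean) + self.var_path(var))
        return x * attn[:, :, None, None]        # rescale each channel
```

In the full block, the residual features would first be split along the channel dimension, e.g. `part_scc, part_tpca = torch.chunk(feats, 2, dim=1)`, with one half refined by SCC and the other rescaled by TPCA before the halves are merged again.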
Adversarial images can fool Deep Neural Network (DNN) based visual identity recognition systems and have the potential to be widely used in online social media for privacy preservation, especially in edge-cloud computing. However, most current adversarial attack techniques focus on strengthening the attack without making a deliberate, methodical, and well-researched effort to retain the perceptual quality of the resulting adversarial examples. This produces obvious distortion in the adversarial examples and degrades users' photo-sharing experience. In this work, we propose a method for generating adversarial images, inspired by the Human Visual System (HVS), that maintains high perceptual quality. First, a novel perceptual loss function is proposed based on the Just Noticeable Difference (JND), which penalizes only the distortion beyond the JND thresholds. Then, a perturbation adjustment strategy is developed that assigns more perturbation to the color channels to which the HVS is less sensitive. Experimental results indicate that our algorithm surpasses state-of-the-art techniques in both subjective viewing and objective assessment on the VGGFace2 dataset.
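The loss is described only as penalizing distortion beyond the JND thresholds, so a plausible minimal formulation is a hinge on the per-pixel perturbation magnitude. How the JND map itself is computed (luminance adaptation, texture masking, and so on) is not given in the abstract, so it is treated here as a precomputed input; that is an assumption.

```python
# Sketch of a JND-bounded perceptual loss: only the portion of the
# perturbation exceeding the per-pixel JND threshold contributes to the loss.
import torch

def jnd_perceptual_loss(adv: torch.Tensor, clean: torch.Tensor,
                        jnd: torch.Tensor) -> torch.Tensor:
    # Perturbation within the JND threshold is assumed invisible and costs nothing.
    excess = torch.relu(adv.sub(clean).abs() - jnd)
    return excess.pow(2).mean()
```

The channel-wise perturbation adjustment could then be realized by scaling the attack budget per RGB channel, with larger budgets on the channels to which the HVS is less sensitive.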
In recent years, as resource shortages and environmental pollution have grown increasingly serious, the exploration and development of clean underwater energy have become particularly important. At the same time, abundant underwater resources and species have attracted many scientists to research underwater-related tasks. Owing to the diversity and complexity of underwater environments, vision tasks such as underwater target detection and capture are difficult to perform. Digital image technology is relatively mature and has achieved remarkable results in many fields, yet research on underwater image processing remains far less effective. The underwater environment is much more complicated than that on land: little natural light penetrates the water, so underwater imaging systems must rely on artificial light sources for illumination. As light travels through water, it is severely attenuated by absorption, reflection, and scattering, so collected underwater images inevitably suffer from limited visible range, blur, low contrast, uneven illumination, color distortion, and noise. The purpose of image enhancement is to mitigate or solve one or more of these problems in a targeted manner, and underwater image enhancement has therefore become one of the key topics in underwater image processing research. In this paper, we propose a conditional generative adversarial network based on attention U-Net, whose attention gate mechanism filters out invalid feature information and effectively captures contour, local texture, and style information. Furthermore, we formulate an objective function from three different loss functions that evaluates image quality in terms of global content, color, and structural information. Finally, we perform end-to-end training on the UIEB real-world underwater image dataset. Comparison experiments show that our method outperforms all compared methods; ablation experiments show that the proposed combined loss function outperforms any single loss function; and the generalizability of our method is verified on two different datasets, UIEB and EUVP.
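The abstract does not detail its attention gate, so the sketch below follows the common additive attention gate formulation used in attention U-Net (Oktay et al., 2018), assuming the gating signal has already been resampled to the skip features' resolution.

```python
# Sketch of an additive attention gate: the gating signal from a deeper
# layer modulates which skip-connection features are passed to the decoder.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)  # project skip features
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)  # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, 1)           # one attention scalar per pixel

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # gate is assumed to already match skip's spatial resolution
        alpha = torch.sigmoid(self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate))))
        return skip * alpha  # suppress spatially irrelevant skip features
```

Suppressing low-relevance skip features in this way is what lets the generator concentrate on contours and local texture rather than propagating background noise.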