Images captured in low-light environments are strongly degraded by noise and low contrast, which is detrimental to tasks such as image recognition and object detection. Retinex-based approaches have been continuously explored for low-light enhancement. Nevertheless, Retinex decomposition is a highly ill-posed problem, so the estimation of the decomposed components must be combined with proper constraints. Meanwhile, the noise mixed into the low-light image causes unpleasant visual effects. To address these problems, we propose a Constraint Low-Rank Approximation Retinex (CLAR) model. In this model, two exponential relative total variation constraints are imposed to ensure that the illumination component is piecewise smooth and that the reflectance component is piecewise continuous. In addition, a low-rank prior is introduced to suppress noise in the reflectance component. With a tailored separated alternating direction method of multipliers (ADMM) algorithm, the illumination and reflectance components are updated accurately. Experimental results on several public datasets verify the effectiveness of the proposed model both subjectively and objectively.
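The Retinex model underlying this abstract factors an observed image into reflectance and illumination, I = R ∘ L. The sketch below is a deliberately minimal illustration of that split, not the CLAR model itself: the RTV-constrained, ADMM-based illumination estimate is replaced with a plain box blur, and the low-rank denoising of the reflectance is omitted. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def naive_retinex(image, sigma=5.0, eps=1e-3):
    """Illustrative Retinex split I = R * L (NOT the paper's CLAR model).

    The illumination L is crudely estimated with a separable box blur as a
    stand-in for the piecewise-smooth, RTV-constrained estimate; the
    reflectance R is then the pixelwise ratio I / L.
    """
    k = max(3, int(2 * sigma) | 1)  # odd kernel width
    kernel = np.ones(k) / k
    # Blur along rows, then along columns (separable smoothing).
    L = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    L = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, L)
    L = np.maximum(L, eps)            # avoid division by zero
    R = np.clip(image / L, 0.0, 1.0)  # reflectance kept in [0, 1]
    return R, L

img = np.random.rand(32, 32) * 0.2    # synthetic low-light image
R, L = naive_retinex(img)
```

A full reproduction would replace the box blur with the exponential relative total variation solver and add the low-rank term on R inside an ADMM loop.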
Medical imaging technology plays a crucial role in the diagnosis and treatment of diseases. However, captured medical images are often of low resolution (LR) due to limited imaging conditions. Super-resolution (SR) is a feasible way to enhance the resolution of a medical image without increasing hardware cost. However, existing SR methods often ignore high-frequency details, which results in blurred edges and unsatisfactory visual quality. In this paper, a gated multi-attention feedback network (GAMA) is proposed for medical image SR. Specifically, a gated multi-feedback network is employed as the backbone to extract hierarchical features. Meanwhile, a layer attention feature extraction (LAFE) module is introduced to refine the feature map. In addition, a channel-space attention reconstruction (CSAR) module is built to enhance the representational ability of the semantic feature map. Furthermore, a gradient variance loss is tailored as a regularization term that guides the model toward generating faithful high-resolution images with rich textures and sharp edges. Experiments verify the effectiveness of the proposed GAMA in comparison with state-of-the-art approaches.
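The gradient variance loss mentioned above penalizes the mismatch between the local variance of image gradients in the super-resolved output and in the ground truth, which discourages over-smoothed edges. The following is a hedged numpy sketch of one plausible formulation; the patch size, gradient operator, and exact aggregation are assumptions, not the paper's definition.

```python
import numpy as np

def gradient_variance_loss(sr, hr, patch=8):
    """Sketch of a gradient-variance-style loss (formulation assumed).

    Compares the per-patch variance of gradient magnitudes between the
    super-resolved image `sr` and the ground-truth image `hr`.
    """
    def patch_grad_var(x):
        gy, gx = np.gradient(x)            # finite-difference gradients
        g = np.hypot(gx, gy)               # gradient magnitude
        h, w = g.shape
        h, w = h - h % patch, w - w % patch  # crop to a whole patch grid
        blocks = g[:h, :w].reshape(h // patch, patch, w // patch, patch)
        return blocks.var(axis=(1, 3))     # variance within each patch
    return float(np.abs(patch_grad_var(sr) - patch_grad_var(hr)).mean())

sr = np.random.rand(32, 32)
loss = gradient_variance_loss(sr, sr)      # identical images -> zero loss
```

In training, this scalar would be added to the pixel reconstruction loss with a weighting coefficient so that edge sharpness is encouraged without dominating fidelity.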
Image fusion plays a significant role in computer vision, since numerous applications benefit from the fusion results. Existing image fusion methods are incapable of perceiving the most discriminative regions under varying illumination and thus fail to emphasize the salient targets while ignoring the abundant texture details of the infrared and visible images. To address this problem, a multiscale aggregation and illumination-aware attention network (MAIANet) is proposed for infrared and visible image fusion. Specifically, MAIANet consists of four modules: a multiscale feature extraction module, a lightweight channel attention module, an image reconstruction module, and an illumination-aware module. The multiscale feature extraction module extracts multiscale features from the images. The lightweight channel attention module assigns different weights to each channel so as to focus on the essential regions of the infrared and visible images. The illumination-aware module estimates the probability distribution over the illumination condition. Meanwhile, an illumination perception loss is formulated from these illumination probabilities to enable MAIANet to better adapt to changes in illumination. Experimental results on three datasets, namely MSRS, TNO, and RoadScene, verify the effectiveness of MAIANet in both qualitative and quantitative evaluations.
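One common way such illumination probabilities enter a fusion loss is as weights that pull the fused image toward the visible image when the scene is well lit and toward the infrared image when it is dark. The sketch below shows that weighting scheme only; it is one plausible reading of the abstract, and the function name, the L1 distance, and the day/night split are all assumptions rather than MAIANet's actual loss.

```python
import numpy as np

def illumination_weighted_loss(fused, vis, ir, p_day):
    """Sketch of an illumination-aware fusion loss (weighting assumed).

    `p_day` is the predicted probability that the scene is well lit.
    The fused image is compared to the visible image under daylight and
    to the infrared image at night, blended by that probability.
    """
    l_vis = np.abs(fused - vis).mean()  # fidelity to the visible image
    l_ir = np.abs(fused - ir).mean()    # fidelity to the infrared image
    return p_day * l_vis + (1.0 - p_day) * l_ir

vis = np.random.rand(8, 8)
ir = np.random.rand(8, 8)
loss_day = illumination_weighted_loss(vis, vis, ir, p_day=1.0)  # -> 0.0
```

With p_day = 1.0 the infrared term vanishes, so a fused image identical to the visible image incurs zero loss; at p_day = 0.0 the roles reverse.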