Image quality enhancement aims to recover rich details from degraded images and is applied in many fields, such as medical imaging, film production and autonomous driving. Deep convolutional neural networks (CNNs) have enabled rapid progress in image quality enhancement. However, most existing CNN-based methods lack versatility, since their network designs target individual subtasks. Besides, they often fail to balance precise spatial representations with necessary contextual information. To address these problems, this paper proposes a novel unified framework for low-light image enhancement, image denoising and image super-resolution. The core of the architecture is a residual hybrid attention block (RHAB), which consists of several dynamic down-sampling modules (DDMs) and hybrid attention up-sampling modules (HAUMs). Specifically, multi-scale feature maps fully interact with each other through nested subnetworks, so that both high-resolution spatial details and high-level contextual information can be combined to improve the representation ability of the network. Further, a hybrid attention network (HAN) is proposed, and evaluations on the three separate subtasks demonstrate its good performance. Extensive experiments on the authors' more complex synthetic dataset show that the method achieves better quantitative and visual results than other state-of-the-art methods.
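The abstract names the building blocks but not their internals, so the following is only a minimal PyTorch-style sketch: the strided convolution standing in for the dynamic down-sampling module (DDM), the channel-attention gate standing in for the hybrid attention up-sampling module (HAUM), and all layer sizes are illustrative assumptions rather than the authors' design.

```python
# Hedged sketch of a residual hybrid attention block (RHAB), assuming PyTorch.
# DDM/HAUM internals are NOT specified in the abstract; the layers below are
# placeholders chosen only to show the down-sample / attend / up-sample /
# residual pattern the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DDM(nn.Module):
    """Placeholder down-sampling module: a strided conv halves the resolution."""
    def __init__(self, channels):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)

    def forward(self, x):
        return F.relu(self.down(x))


class HAUM(nn.Module):
    """Placeholder up-sampling module: channel attention plus bilinear upsample."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                          align_corners=False)
        x = x * self.attn(x)                 # re-weight channels (attention)
        return F.relu(self.conv(x)) + skip   # fuse with the high-res branch


class RHAB(nn.Module):
    """Residual block: down-sample, attend, up-sample, add the input back."""
    def __init__(self, channels=64):
        super().__init__()
        self.ddm = DDM(channels)
        self.haum = HAUM(channels)

    def forward(self, x):
        low = self.ddm(x)        # coarse, contextual features
        out = self.haum(low, x)  # recover resolution, fuse with spatial details
        return out + x           # residual connection


if __name__ == "__main__":
    block = RHAB(64)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```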
Depth maps captured by traditional consumer-grade depth cameras are often noisy and low-resolution. In particular, when low-resolution depth maps are upsampled with large upsampling factors, the resulting depth maps tend to suffer from blurred edges. To address these issues, we propose a multi-channel progressive attention fusion network that uses a pyramid structure to progressively recover high-resolution (HR) depth maps. The network takes as input a low-resolution depth map and its corresponding color image. The color image serves as prior information to fill in the missing high-frequency details of the depth map. An attention-based multi-branch feature fusion module is then employed to mitigate the texture replication issue caused by incorrect guidance from the color image and by inconsistencies between the color image and the depth map. This module restores the HR depth map by effectively integrating information from both inputs. Extensive experimental results demonstrate that the proposed method outperforms existing methods.
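The abstract does not give the pyramid depth, channel widths, or the exact fusion design, so the sketch below is only a hedged illustration of color-guided, progressive depth up-sampling; the spatial gate stands in for the multi-branch feature fusion module, and every module name and hyper-parameter here is assumed, not taken from the paper.

```python
# Hedged sketch of progressive, color-guided depth super-resolution in PyTorch.
# The attention gate decides per pixel how much color guidance to trust, which
# is one plausible way to suppress texture replication; it is an assumption,
# not the authors' fusion module.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionStage(nn.Module):
    """One pyramid level: upsample depth features x2 and fuse color guidance."""
    def __init__(self, channels):
        super().__init__()
        self.depth_up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.color_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, depth_feat, color):
        d = self.depth_up(depth_feat)
        c = self.color_enc(F.interpolate(color, size=d.shape[-2:],
                                         mode="bilinear", align_corners=False))
        a = self.gate(torch.cat([d, c], dim=1))      # per-pixel guidance weight
        return F.relu(self.merge(torch.cat([d, a * c], dim=1)))


class ProgressiveDepthSR(nn.Module):
    """Recovers an HR depth map stage by stage for a power-of-two scale factor."""
    def __init__(self, scale=4, channels=32):
        super().__init__()
        n_stages = int(math.log2(scale))
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.stages = nn.ModuleList(FusionStage(channels) for _ in range(n_stages))
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_depth, hr_color):
        feat = F.relu(self.head(lr_depth))
        for stage in self.stages:
            feat = stage(feat, hr_color)
        return self.tail(feat)


if __name__ == "__main__":
    net = ProgressiveDepthSR(scale=4)
    out = net(torch.randn(1, 1, 64, 64), torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])
```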