Complex lighting conditions and the limited dynamic range of imaging devices result in captured images with ill exposure and information loss. Existing image enhancement methods based on histogram equalization, Retinex-inspired decomposition models, and deep learning suffer from manual tuning or poor generalization. In this work, we report a self-supervised image enhancement method for ill-exposed images that enables tuning-free correction. First, a dual illumination estimation network is constructed to estimate the illumination of under- and over-exposed areas, yielding the corresponding intermediate corrected images. Second, because these intermediate corrected images have different best-exposed areas, Mertens' multi-exposure fusion strategy is used to fuse them into a single well-exposed image. This correction-fusion scheme adapts to various types of ill-exposed images. Finally, a self-supervised learning strategy that learns global histogram adjustment is studied for better generalization. Compared with training on paired datasets, our method requires only ill-exposed images, which is crucial when paired data are inaccessible or imperfect. Experiments show that our method reveals more details with better visual perception than other state-of-the-art methods. Furthermore, the weighted average scores of the image naturalness metrics NIQE and BRISQUE and the contrast metrics CEIQ and NSS on five real-world image datasets are improved by 7%, 15%, 4%, and 2%, respectively, over a recent exposure correction method.
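As a rough illustration of the correction-fusion pipeline described above, the sketch below assumes a Retinex-style model (image = reflectance × illumination) and uses OpenCV's Mertens exposure fusion. The `estimate_illumination` callable is a hypothetical placeholder for the dual illumination estimation network, not the authors' actual model.

```python
# Sketch of a correction-fusion pipeline (assumed structure, not the paper's code).
import cv2
import numpy as np

def correct_exposure(img_bgr, estimate_illumination):
    """Correct under- and over-exposure separately, then fuse with Mertens' method."""
    img = img_bgr.astype(np.float32) / 255.0

    # Under-exposure branch: recover reflectance R = I / L from the estimated illumination.
    L_under = np.clip(estimate_illumination(img), 1e-3, 1.0)
    if L_under.ndim == 2:
        L_under = L_under[..., None]  # broadcast a single-channel map over BGR
    under_corrected = np.clip(img / L_under, 0.0, 1.0)

    # Over-exposure branch: correct the inverted image, then invert back.
    inv = 1.0 - img
    L_over = np.clip(estimate_illumination(inv), 1e-3, 1.0)
    if L_over.ndim == 2:
        L_over = L_over[..., None]
    over_corrected = 1.0 - np.clip(inv / L_over, 0.0, 1.0)

    # Mertens' multi-exposure fusion keeps the best-exposed regions from each branch.
    merge = cv2.createMergeMertens()
    fused = merge.process([
        (img * 255).astype(np.uint8),
        (under_corrected * 255).astype(np.uint8),
        (over_corrected * 255).astype(np.uint8),
    ])
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```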
Poor lighting conditions in the real world may lead to ill-exposed captured images that suffer from compromised aesthetic quality and information loss in post-processing. Recent exposure correction works address this problem by learning the mapping from images of multiple exposure intensities to well-exposed images. However, this requires a large amount of paired training data, which is hard to obtain in data-inaccessible scenarios. This paper presents a highly robust exposure correction method based on self-supervised learning. Specifically, two sub-networks are designed to deal with the under- and over-exposed regions of ill-exposed images, respectively; this hybrid architecture enables adaptive ill-exposure correction. A fusion module then fuses the under-exposure-corrected image and the over-exposure-corrected image to obtain a well-exposed image with vivid color and clear textures. Notably, the training process is guided by histogram-equalized images through a histogram equalization prior (HEP), so the presented method requires only ill-exposed images as training data. Extensive experiments on real-world image datasets validate the robustness and superiority of this technique.
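To show how a histogram equalization prior can supervise training without paired data, here is a minimal sketch: `hep_target` builds a pseudo reference by equalizing the luminance channel of the ill-exposed input, and the L1 term is an assumed stand-in for the paper's actual objective, not its published loss.

```python
# Sketch of an HEP-style self-supervised loss (assumed formulation).
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def hep_target(img_bgr):
    """Histogram-equalize the luminance channel of a uint8 BGR image to build a pseudo reference."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def hep_loss(output, img_bgr):
    """L1 distance between the network output (3xHxW tensor in [0, 1]) and the HEP pseudo reference."""
    target = torch.from_numpy(hep_target(img_bgr)).float().permute(2, 0, 1) / 255.0
    return F.l1_loss(output, target.to(output.device))
```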