Abstract: Gamma correction is a useful method for improving image quality under uncontrolled illumination conditions. This paper presents a new technique called Mean-Variance Gamma (MV-Gamma), which automatically estimates the amount of gamma correction in the absence of any information about the environmental light and the imaging device. First, we treat every row and column of the image pixel matrix as a random variable, so that we can compute a feature vector of the means and variances of the image rows and columns. We then apply a range of inverse gamma values to the input image and compute this feature vector for each inverse gamma value, comparing it with a target vector defined from statistics of well-lit images. The inverse gamma value that gives the minimum Euclidean distance between the image feature vector and the target one is selected. Experimental results on various test images confirm the superiority of the proposed method over the existing methods tested.
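A minimal sketch of the MV-Gamma selection loop described above, assuming grayscale images scaled to [0, 1]; the gamma search range, the exponent convention, and the availability of a same-sized target vector are illustrative assumptions, not values taken from the paper.

import numpy as np

def feature_vector(img):
    # Treat every row and column of the pixel matrix as a random variable
    # and stack their means and variances into one feature vector.
    return np.concatenate([img.mean(axis=1), img.var(axis=1),
                           img.mean(axis=0), img.var(axis=0)])

def mv_gamma(img, target, inv_gammas=np.arange(0.2, 3.0, 0.1)):
    # img: grayscale image in [0, 1]; target: feature vector computed from
    # well-lit reference images of the same size (assumed to be available).
    best_g, best_d = None, np.inf
    for g in inv_gammas:
        corrected = img ** g                                       # candidate inverse-gamma correction
        d = np.linalg.norm(feature_vector(corrected) - target)     # Euclidean distance to target
        if d < best_d:
            best_g, best_d = g, d
    return best_g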
The scale invariant feature transform (SIFT), proposed by David Lowe, is a powerful method that extracts and describes local features, called keypoints, from images. These keypoints are invariant to scale, translation, and rotation, and partially invariant to illumination variation. Despite this robustness, strong lighting variation remains a difficult challenge for SIFT-based facial recognition systems, where significant performance degradation has been reported. To build a robust system under these conditions, the variation in lighting must first be removed. Additionally, the default values of the SIFT parameters that remove unstable keypoints and inadequately matched keypoints are not well suited to images with illumination variation, and SIFT keypoints can also be incorrectly matched when using the original SIFT matching method. To overcome these issues, the authors propose a method for removing the illumination variation in images and correctly setting SIFT's main parameter values (contrast threshold, curvature threshold, and match threshold) to enhance SIFT feature extraction and matching. The proposed method is based on an estimation of comparative image lighting quality, evaluated through an automatic estimation of the gamma correction value. Facial recognition experiments show significant results that clearly illustrate the importance of the proposed robust recognition system.
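A hedged sketch of the pipeline the abstract implies, using OpenCV's SIFT implementation: gamma-based illumination normalization followed by keypoint extraction with explicit contrast and curvature (edge) thresholds and ratio-test matching. The threshold values, the gamma estimates, and the helper name are illustrative assumptions, not the authors' reported settings.

import cv2
import numpy as np

def gamma_normalized_sift_match(img1, img2, gamma1, gamma2,
                                contrast_thr=0.02, edge_thr=12, match_thr=0.7):
    # Illumination normalization via the (separately estimated) gamma values.
    def normalize(img, g):
        out = np.power(img.astype(np.float32) / 255.0, 1.0 / g)
        return (out * 255.0).astype(np.uint8)
    n1, n2 = normalize(img1, gamma1), normalize(img2, gamma2)

    # SIFT with explicit contrast and curvature (edge) thresholds.
    sift = cv2.SIFT_create(contrastThreshold=contrast_thr, edgeThreshold=edge_thr)
    kp1, des1 = sift.detectAndCompute(n1, None)
    kp2, des2 = sift.detectAndCompute(n2, None)

    # Lowe's ratio test with a tunable match threshold.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [pair[0] for pair in matcher.knnMatch(des1, des2, k=2)
            if len(pair) == 2 and pair[0].distance < match_thr * pair[1].distance]
    return kp1, kp2, good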