The Lambertian model is a classical illumination model consisting of a surface albedo component and a light intensity component. Some previous studies assume that the light intensity component mainly lies in the large-scale features. They adopt holistic image decompositions to separate it out, but it is difficult to decide the separating point between large-scale and small-scale features. In this paper, we propose to take a logarithm transform, which changes the multiplication of surface albedo and light intensity into an additive model. Then, a difference (subtraction) between two pixels in a neighborhood can eliminate most of the light intensity component. By dividing a neighborhood into subregions, edge maps of multiple scales can be obtained. Each edge map is then multiplied by a weight that is determined by an independent training scheme. Finally, all the weighted edge maps are combined to form a robust holistic feature map. Extensive experiments on four benchmark data sets in controlled and uncontrolled lighting conditions show that the proposed method achieves promising results, especially in uncontrolled lighting conditions, even when mixed with other complicated variations.
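A minimal sketch of this idea in Python with NumPy and SciPy is shown below, assuming box-filter neighborhoods and uniform placeholder weights; the scales and weights are illustrative only, since the paper determines the per-scale weights with its own independent training scheme.

```python
import numpy as np
from scipy import ndimage

def logarithmic_edge_maps(image, scales=(1, 2, 3), weights=None):
    """Sketch of the multi-scale, log-domain edge-map feature.

    The log transform turns the multiplicative Lambertian model
    (albedo * illumination) into an additive one, so differencing
    nearby pixels cancels the slowly varying illumination term.
    `scales` and `weights` are placeholders, not trained values.
    """
    log_img = np.log1p(image.astype(np.float64))  # logarithm transform

    edge_maps = []
    for s in scales:
        # Difference between a pixel and the mean of its neighborhood
        # at radius `s` stands in for subtraction within a subregion.
        neighborhood_mean = ndimage.uniform_filter(log_img, size=2 * s + 1)
        edge_maps.append(log_img - neighborhood_mean)

    if weights is None:
        weights = np.ones(len(edge_maps)) / len(edge_maps)  # placeholder weights

    # Weighted combination into a single holistic feature map.
    return sum(w * e for w, e in zip(weights, edge_maps))
```

In practice, the per-scale weights would come from the paper's training scheme rather than the uniform placeholder used here.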
The sparse representation classifier (SRC) and kernel discriminant analysis (KDA) are two successful methods for face recognition. SRC handles occlusion well, while KDA is effective at suppressing intraclass variations. In this paper, we propose the kernel extended dictionary (KED) for face recognition, which provides an efficient way to combine KDA and SRC. We first learn several kernel principal components of occlusion variations as an occlusion model, which can represent the possible occlusion variations efficiently. Then, the occlusion model is projected by KDA to obtain the KED, which can be computed via the same kernel trick as new testing samples. Finally, we use structured SRC for classification, which is fast because only a small number of atoms are appended to the basic dictionary and the feature dimension is low. We also extend KED to multikernel space to fuse different types of features at the kernel level. Experiments on several large-scale data sets demonstrate that KED not only achieves impressive results for nonoccluded samples, but also handles occlusion well without overfitting, even with a single gallery sample per subject.
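To make the pipeline concrete, the following Python sketch substitutes linear stand-ins for the kernel components: scikit-learn's LinearDiscriminantAnalysis in place of KDA, plain PCA in place of kernel PCA for the occlusion model, and orthogonal matching pursuit for the structured sparse coding step. The function name, parameter values, and the class-wise residual rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_with_extended_dictionary(gallery, gallery_labels,
                                      occlusion_samples, test_sample,
                                      n_occlusion_atoms=5, n_nonzero=10):
    """Simplified, linear stand-in for the KED pipeline (illustration only)."""
    gallery_labels = np.asarray(gallery_labels)

    # Discriminant projection learned from the gallery (stand-in for KDA).
    lda = LinearDiscriminantAnalysis()
    proj_gallery = lda.fit_transform(gallery, gallery_labels)

    # Occlusion model: principal components of the projected occlusion
    # samples (stand-in for the kernel principal components in the paper).
    pca = PCA(n_components=n_occlusion_atoms)
    pca.fit(lda.transform(occlusion_samples))
    occlusion_atoms = pca.components_            # (n_atoms, lda_dim)

    # Extended dictionary: gallery atoms followed by occlusion atoms.
    dictionary = np.vstack([proj_gallery, occlusion_atoms]).T

    # Sparse coding of the projected test sample over the extended dictionary.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    y = lda.transform(test_sample.reshape(1, -1)).ravel()
    omp.fit(dictionary, y)
    coefs = omp.coef_

    # Structured decision: per-class residual using that class's gallery
    # atoms plus the shared occlusion atoms.
    best_label, best_residual = None, np.inf
    for label in np.unique(gallery_labels):
        mask = np.zeros_like(coefs)
        mask[:len(gallery_labels)][gallery_labels == label] = 1.0
        mask[len(gallery_labels):] = 1.0  # keep occlusion coefficients
        residual = np.linalg.norm(y - dictionary @ (coefs * mask))
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

Because only a handful of occlusion atoms are appended to the gallery dictionary and the discriminant projection keeps the feature dimension low, the sparse coding step stays cheap, which mirrors the efficiency argument made in the abstract.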