In still-to-video face recognition (FR), faces captured with surveillance cameras are matched against reference stills of target individuals enrolled in the system. FR is a challenging problem in video surveillance due to uncontrolled capture conditions (variations in pose, expression, illumination, blur, scale, etc.) and the limited number of reference stills available to model each target individual. This paper introduces a new approach that generates multiple synthetic face images per reference still based on camera-specific capture conditions, in order to deal with illumination variations. For each reference still, a diverse set of faces from non-target individuals appearing in the camera viewpoint is selected based on luminance and contrast distortion. These face images are then decomposed into a detail layer and a large-scale layer using an edge-preserving image decomposition, to isolate their illumination-dependent component. Finally, the large-scale layers of these images are morphed with each reference still to generate multiple synthetic reference stills that incorporate the camera's illumination and contrast conditions. Experimental results obtained with the ChokePoint dataset reveal that these synthetic faces produce an enhanced face model: as the number of synthetic faces grows, the proposed approach provides a higher level of accuracy and robustness across a range of capture conditions.
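The edge-preserving decomposition described above can be illustrated with a minimal sketch. The snippet below uses a naive bilateral filter (a common edge-preserving smoother; the paper does not specify this exact filter, so treat it as an assumption) to split a face image, in the log domain, into an illumination-dependent large-scale layer and a detail layer. The function names and parameters (`sigma_s`, `sigma_r`, `radius`) are illustrative, not from the paper.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=6):
    """Naive edge-preserving bilateral filter: each output pixel is a
    weighted mean of its neighborhood, with spatial and range kernels."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def decompose(face):
    """Split a face (values in [0, 1]) into an illumination-dependent
    large-scale layer and a detail layer, working in the log domain so
    the two layers add back up to the original."""
    log_face = np.log1p(face)
    large_scale = bilateral_filter(log_face)   # smooth illumination component
    detail = log_face - large_scale            # edges, texture, identity cues
    return large_scale, detail
```

The large-scale layer is what would be morphed onto a reference still to transfer the camera's illumination conditions, while the detail layer carries identity-specific structure.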
In video surveillance, face recognition (FR) systems are employed to detect individuals of interest appearing over a distributed network of cameras. The performance of still-to-video FR systems can decline significantly because faces captured in the unconstrained operational domain (OD) over multiple video cameras have a different underlying data distribution than faces captured under controlled conditions in the enrollment domain with a still camera. This is particularly true when individuals are enrolled in the system using a single reference still. To improve the robustness of these systems, it is possible to augment the reference set by generating synthetic faces based on the original still. However, without knowledge of the OD, many synthetic images must be generated to account for all possible capture conditions. FR systems may, therefore, require complex implementations and yield lower accuracy when training on many less relevant images. This paper introduces an algorithm for domain-specific face synthesis (DSFS) that exploits the representative intra-class variation information available from the OD. Prior to operation (during camera calibration), a compact set of faces from unknown persons appearing in the OD is selected through affinity propagation clustering in the capture-condition space (defined by pose and illumination estimation). The domain-specific variations of these face images are then projected onto the reference still of each individual by integrating an image-based face relighting technique inside a 3-D reconstruction framework. The result is a compact set of synthetic faces that resemble individuals of interest under the capture conditions relevant to the OD.
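The exemplar-selection step can be sketched as follows: each detected face is described by a capture-condition vector, and affinity propagation picks a compact set of exemplars without fixing the number of clusters in advance. The feature layout below (yaw, pitch, mean luminance as stand-ins for the paper's pose and illumination estimates) and all numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Hypothetical capture-condition features, one row per detected face:
# [yaw_deg, pitch_deg, mean_luminance]. Three simulated viewpoint /
# lighting regimes stand in for faces observed in the operational domain.
conditions = np.vstack([
    rng.normal([-30,  0, 0.3], [3, 2, 0.02], size=(20, 3)),  # left profile, dim
    rng.normal([  0,  0, 0.7], [3, 2, 0.02], size=(20, 3)),  # frontal, bright
    rng.normal([ 30, 10, 0.5], [3, 2, 0.02], size=(20, 3)),  # right, medium
])

# Affinity propagation selects exemplar faces; unlike k-means, the
# number of clusters emerges from the data and the preference setting.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(conditions)
exemplars = ap.cluster_centers_indices_  # indices of the compact face set
```

The faces at the exemplar indices would then serve as the compact set whose illumination and pose variations are transferred onto each reference still.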
In a particular implementation based on sparse representation classification, the synthetic faces generated with DSFS are employed to form a cross-domain dictionary that accounts for structured sparsity, where the dictionary blocks combine the original and synthetic faces of each individual. Experimental results obtained with videos from the ChokePoint and COX-S2V datasets reveal that augmenting the reference gallery set of still-to-video FR systems using the proposed DSFS approach can provide a significantly higher level of accuracy than state-of-the-art approaches, with only a moderate increase in computational complexity.

Index Terms—Face recognition, single sample per person, face synthesis, 3D face reconstruction, illumination transferring, sparse representation-based classification, video surveillance.
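The block-structured dictionary idea can be sketched minimally. In the snippet below, each dictionary block stacks the original and synthetic face vectors of one identity; a probe is scored by its least-squares reconstruction residual per block, and the identity with the smallest residual wins. This nearest-subspace scoring is a simplification of block-sparse representation classification, used here only to show the dictionary structure; the paper's actual solver is not reproduced.

```python
import numpy as np

def block_src_identify(probe, blocks):
    """Classify a probe face against a cross-domain dictionary.

    blocks : list of (d, n_i) arrays, one per identity, whose columns
             stack that identity's original and synthetic face vectors.
    probe  : (d,) vectorized probe face.

    Returns the index of the best-matching identity block and the
    per-block reconstruction residuals.
    """
    residuals = []
    for D in blocks:
        # Best reconstruction of the probe within this identity's block.
        coef, *_ = np.linalg.lstsq(D, probe, rcond=None)
        residuals.append(float(np.linalg.norm(probe - D @ coef)))
    return int(np.argmin(residuals)), residuals
```

In this structure, adding DSFS synthetic faces widens each identity's block so that probes captured under the operational domain's illumination and pose conditions fall closer to the correct subspace.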