We propose a view-based approach to recognizing humans from their gait. Two different image features are considered: the width of the outer contour of the binarized silhouette of the walking person, and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower-dimensional space by generating what we call the frame-to-exemplar distance (FED). The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically, the observation probability B) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition. The performance of the methods is illustrated using several databases.
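The direct approach can be illustrated with a small sketch. The snippet below is a minimal illustration under assumptions not stated in the abstract: numpy arrays, a Euclidean frame-to-exemplar distance, and an exponential mapping from distance to a pseudo-likelihood. Exemplar selection, Baum-Welch training, and forward/Viterbi decoding are omitted, and all function names are hypothetical.

```python
# Minimal sketch of the frame-to-exemplar distance (FED) idea and of building
# the HMM observation term B from distances rather than learned densities.
# Assumptions: numpy arrays, Euclidean distance, exponential distance-to-score
# mapping; names are illustrative, not the paper's implementation.
import numpy as np

def fed_vector(frame_feature, exemplars):
    """Distance from one frame's feature (e.g. a width vector) to each of N exemplars."""
    return np.array([np.linalg.norm(frame_feature - e) for e in exemplars])

def observation_scores(frame_feature, exemplars, sigma=1.0):
    """Pseudo-likelihood of the frame under each HMM state, taken as a
    decreasing function of the frame-to-exemplar distance, so that no
    high-dimensional probability density has to be learned."""
    return np.exp(-fed_vector(frame_feature, exemplars) / sigma)

# A gait sequence of T frames then becomes a T x N score matrix that can be
# plugged into a standard HMM forward pass for recognition.
```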
Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-based approach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector, such as the downsampled and smoothed width vectors and the velocity profile, and sequences of such temporally ordered feature vectors are used to represent a person's gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with naturally occurring changes in walking speed. The performance of the proposed method is tested using different gait databases.
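Since the matching step hinges on DTW, a compact sketch may help. This is a minimal sketch under assumed conventions (numpy arrays, Euclidean per-frame distance, nearest-neighbour classification); the feature extraction itself (downsampling, smoothing, velocity profile) is not shown.

```python
# Minimal dynamic time-warping (DTW) sketch for matching two temporally
# ordered sequences of width vectors of possibly different lengths.
# Assumptions: numpy arrays and a Euclidean per-frame distance.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Non-linear time normalization: accumulated cost of the best alignment."""
    T, S = len(seq_a), len(seq_b)
    cost = np.full((T + 1, S + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch seq_b
                                 cost[i, j - 1],      # stretch seq_a
                                 cost[i - 1, j - 1])  # match
    return cost[T, S]

# Usage: a probe sequence is assigned to the gallery subject with the
# smallest DTW distance (nearest-neighbour classification).
```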
Human gait is a spatio-temporal phenomenon and typifies the motion characteristics of an individual. The gait of a person is easily recognizable when extracted from a side view of the person. Accordingly, gait-recognition algorithms work best when presented with images in which the person walks parallel to the camera (i.e., the image plane). However, it is not realistic to expect this assumption to hold in most real-life scenarios. Hence it is important to develop methods whereby the side view can be generated from any other arbitrary view in a simple yet accurate manner. That is the main theme of this paper. We show that if the person is far enough from the camera, it is possible to synthesize a side view (referred to as the canonical view) from any other arbitrary view using a single camera. Two methods are proposed for doing this: (i) using the perspective projection model, and (ii) using the optical-flow-based structure-from-motion equations. A simple camera calibration scheme for this method is also proposed. Examples of synthesized views are presented. Preliminary testing with gait recognition algorithms gives encouraging results. A by-product of this method is a simple algorithm for synthesizing novel views of a planar scene.
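To make the canonical-view idea concrete, the following sketch warps an image with the homography induced by a rotation about the vertical axis, which approximates side-view synthesis when the subject is far from the camera. The simplification H = K R K^{-1}, the calibration parameters, and the walking-direction angle theta are assumptions added for illustration; the paper's own perspective-projection and structure-from-motion derivations are not reproduced here.

```python
# Illustrative-only sketch of synthesizing an approximate side (canonical) view.
# Assumption: the person is distant enough that the warp can be approximated by
# the rotational homography H = K R K^{-1}; this is a simplification, not the
# abstract's exact method.
import numpy as np
import cv2  # OpenCV, used only for the final image warp

def canonical_view(image, theta, focal_length, cx, cy):
    """Warp the input view by a rotation of angle theta about the vertical axis."""
    K = np.array([[focal_length, 0.0, cx],
                  [0.0, focal_length, cy],
                  [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    H = K @ R @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```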
Identification of humans from arbitrary viewpoints is an important requirement for different tasks, including perceptual interfaces for intelligent environments, covert security, and access control. For optimal performance, the system must use as many cues as possible and combine them in meaningful ways. In this paper we present the fusion of face and gait cues for the single-camera case. We employ a view-invariant gait recognition algorithm for gait recognition, and a sequential importance sampling-based algorithm for probabilistic face recognition from video. We employ decision fusion to combine the results of the gait and face recognition algorithms, and consider two fusion scenarios: hierarchical and holistic. The first uses the gait recognition algorithm as a filter to pass a smaller set of candidates on to the face recognition algorithm. The second combines the similarity scores obtained individually from the face and gait recognition algorithms; simple rules such as SUM, MIN, and PRODUCT are used for combining the scores. The results of the fusion are demonstrated on the NIST database, which contains outdoor gait and face data for 30 subjects.
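The holistic (score-level) fusion can be summarized with a short sketch. The min-max normalization step and the function names are assumptions added for illustration; the abstract only states that SUM, MIN, and PRODUCT rules are applied to the similarity scores.

```python
# Minimal sketch of score-level fusion of gait and face similarity scores
# using the SUM, MIN and PRODUCT rules. The min-max normalization is an
# assumed preprocessing step, not taken from the abstract.
import numpy as np

def minmax_normalize(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_and_identify(gait_scores, face_scores, rule="SUM"):
    """Combine per-subject similarity scores and return the best-matching index."""
    g, f = minmax_normalize(gait_scores), minmax_normalize(face_scores)
    if rule == "SUM":
        fused = g + f
    elif rule == "MIN":
        fused = np.minimum(g, f)
    elif rule == "PRODUCT":
        fused = g * f
    else:
        raise ValueError("rule must be SUM, MIN or PRODUCT")
    return int(np.argmax(fused))
```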