Three methods are explored that help indicate whether feature points are potentially visible or occluded in the matching phase of a keyframe-based real-time visual SLAM system. The first derives a measure of a point's potential visibility from its angular proximity to the keyframes in which it was observed and globally adjusted, and preferentially selects high-visibility points when tracking the camera pose between keyframes; sorting and selecting features within bins spread over the image is found to improve tracking stability. The second method automatically recognizes and locates 3D polyhedral objects alongside the point map and uses them to determine occlusion. The third grows surfaces from the map points themselves. The performance of each method is tested on live and recorded sequences.
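The first method can be illustrated with a minimal sketch. The function names, the cosine-based scoring, and the grid parameters below are assumptions for illustration, not the paper's exact formulation: a point is scored by how close the current viewing direction is to the directions from which it was observed in keyframes, and the highest-scoring features are kept per image bin so the selection stays spread over the image.

```python
import numpy as np

def visibility_score(point_dir, keyframe_dirs):
    """Cosine of the smallest angle between the current viewing
    direction and the directions from which the point was observed
    in its keyframes (higher = more likely visible)."""
    point_dir = point_dir / np.linalg.norm(point_dir)
    kf = keyframe_dirs / np.linalg.norm(keyframe_dirs, axis=1, keepdims=True)
    return float(np.max(kf @ point_dir))

def select_binned(features, image_size, grid=(4, 4), per_bin=5):
    """Keep only the highest-scoring features inside each image bin,
    so selected features remain spread over the whole image."""
    h, w = image_size
    gh, gw = grid
    bins = {}
    for (u, v, score) in features:
        key = (min(int(v / h * gh), gh - 1), min(int(u / w * gw), gw - 1))
        bins.setdefault(key, []).append((u, v, score))
    selected = []
    for pts in bins.values():
        pts.sort(key=lambda p: p[2], reverse=True)
        selected.extend(pts[:per_bin])
    return selected
```

Binning prevents the tracker from concentrating all selected features in one well-observed region of the image, which is consistent with the stability improvement the abstract reports.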
Biometric recognition is a critical task in security control systems. Although the face has long been widely accepted as a practical biometric for human recognition, it can be easily stolen and imitated. Moreover, in video surveillance, it is a challenge to obtain reliable facial information from an image taken at a long distance with a low-resolution camera. Gait, on the other hand, has recently been used for human recognition because it is not easy to replicate, and reliable information can be obtained from a low-resolution camera at a long distance. However, the gait biometric alone is still limited by intrinsic factors. In this paper, we propose a multimodal biometric system that combines information from both the face and gait. The proposed system uses a deep convolutional neural network with transfer learning: the network learns discriminative spatiotemporal features from gait and facial features from face images, and the two extracted features are fused into a common feature space at the feature level. Experiments were conducted on the publicly available CASIA-B gait and Extended Yale-B databases and on a dataset of walking videos of 25 users. The proposed model achieves a 97.3 percent classification accuracy with an F1 score of 0.97 and an equal error rate (EER) of 0.004.
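Feature-level fusion of the two modalities can be sketched as follows. This is an illustrative assumption about the fusion step, not the paper's exact layer: each modality's embedding is L2-normalised (so neither dominates by magnitude) and the two are concatenated into one joint feature vector for classification.

```python
import numpy as np

def fuse_features(gait_feat, face_feat):
    """Feature-level fusion: L2-normalise each modality's embedding,
    then concatenate into a single joint feature vector.
    (Sketch only; the paper's fusion layer may differ.)"""
    g = gait_feat / (np.linalg.norm(gait_feat) + 1e-12)
    f = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    return np.concatenate([g, f])
```

Fusing at the feature level, rather than at the score or decision level, lets a single downstream classifier learn cross-modal correlations between the gait and face embeddings.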