The main aims of this chapter are to show the importance and role of human identification and recognition in the field of human-robot interaction, to discuss the two main approaches to person identification, namely traditional and biometric systems, and to compare the biometric traits most commonly used in recognition systems, such as the face, ear, palmprint, iris, and speech. By comparing the requirements, advantages, disadvantages, recognition algorithms, challenges, and experimental results for each trait, the chapter then identifies the most suitable and efficient biometric trait for human-robot interaction. It also discusses the human-robot interaction cases in which a unimodal biometric system suffices and why a multimodal biometric system is sometimes required. Finally, two fusion methods for multimodal biometric systems are presented and compared.
This paper proposes a 2D ear recognition approach based on the fusion of the ear and the tragus using a score-level fusion strategy. The approach aims to mitigate the effects of partial occlusion, pose variation, and weak illumination, since the accuracy of ear recognition may be reduced when one or more of these challenges is present. In this study, the effect of each challenge is estimated separately, and many ear samples affected by two different challenges concurrently are also considered. The tragus is used as a biometric trait because it is often free from occlusion and provides discriminative features even under varying pose and illumination. Features are extracted using local binary patterns, and the evaluation is carried out on three datasets of the USTB database. The results show that fusing the ear and the tragus improves recognition performance compared with the corresponding unimodal systems. Experimental results show that the proposed method enhances recognition rates by fusing the nonoccluded ear parts with the tragus in the cases of partial occlusion, pose variation, and weak illumination. The proposed method also outperforms feature-level fusion methods and most state-of-the-art ear recognition systems.
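The pipeline described above (local binary pattern features per trait, then score-level fusion of ear and tragus matchers) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the 3x3 LBP operator, histogram-intersection matcher, and fusion weights are all illustrative assumptions.

```python
# Minimal sketch of LBP feature extraction and score-level fusion.
# All parameter choices (3x3 neighbourhood, weights) are assumptions
# for illustration, not the configuration used in the paper.

def lbp_histogram(img):
    """256-bin normalized LBP histogram of a 2D grayscale image
    given as a list of lists of intensities."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # The 8 neighbours of the centre pixel, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set the bit if the neighbour is at least as bright.
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def hist_similarity(h1, h2):
    """Histogram-intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def fused_score(ear_score, tragus_score, w_ear=0.6, w_tragus=0.4):
    """Score-level fusion: weighted sum of the per-trait match scores."""
    return w_ear * ear_score + w_tragus * tragus_score
```

In an identification setting, a probe would be compared against each gallery subject by computing the ear and tragus similarities separately and assigning the identity with the highest fused score.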
This study measures the efficiency of the ear and the profile face in distinguishing identical twins under both identification and verification modes. In addition to distinguishing identical twins by ear and profile face separately, we propose to fuse these traits in all possible binary combinations of the left ear, left profile face, right ear, and right profile face. Fusion is implemented with score-level and decision-level fusion techniques in the proposed method; feature-level fusion is used for comparison. All experiments are also carried out on nontwin individuals, and the recognition performance for twins and nontwins is compared. Local binary patterns, local phase quantization, and binarized statistical image features are used as texture-based descriptors for feature extraction. Images under both controlled and uncontrolled lighting are tested, using ear and profile-face images from the ND-TWINS-2009-2010 dataset. The experimental results show that the proposed method is more accurate and reliable than using ear or profile-face images separately: for identical twins it achieves recognition rates of 100% and 99.45% and equal error rates of 0.54% and 1.63% under controlled and uncontrolled illumination conditions, respectively.
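The decision-level fusion used above can be sketched as simple voting over the decisions of the independent per-trait matchers (left ear, left profile face, right ear, right profile face). This is a minimal illustration under assumed rules, not the paper's exact fusion scheme: majority voting for identification and a k-of-n acceptance rule for verification are common textbook choices.

```python
# Minimal sketch of decision-level fusion. The voting and k-of-n rules
# are illustrative assumptions, not the paper's exact configuration.
from collections import Counter

def majority_vote(decisions):
    """Identification: each trait matcher votes for one identity;
    the identity with the most votes wins (ties broken by the order
    in which the traits are listed)."""
    return Counter(decisions).most_common(1)[0][0]

def k_of_n_accept(accepts, k=2):
    """Verification: accept the claimed identity if at least k of the
    per-trait matchers accept it (k = n gives the AND rule, k = 1 the
    OR rule)."""
    return sum(accepts) >= k
```

For example, fusing the four traits in the identification mode means collecting one identity decision per trait and returning the majority winner, while a binary combination such as left ear + right profile face would vote over just those two decisions.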