Biometric authentication systems offer potential advantages over traditional knowledge-based methods. However, most widely deployed biometric systems lack template security and robustness. To address these issues, the authors propose a multimodal biometric system based on the combination of multiple modalities and optimal score-level fusion. In addition, key features are introduced for each modality to generate cancellable biometric features: features from individual traits are combined with the corresponding key features to perform feature transformation. A robust template is generated by diffusing the individual transformed matrices using graph-based random-walk cross-diffusion. Individual classifier scores are then optimally fused using the proposed multistage score-level fusion model, in which optimal belief masses for each classifier are determined using cuckoo search optimisation and the resulting classifier beliefs are fused using DSmT-based proportional conflict redistribution (PCR-6) rules. Experimental results demonstrate that optimal score fusion applied to cross-diffused features produces better results than existing state-of-the-art multimodal fusion schemes. Averaged over four chimeric benchmark datasets, the proposed method achieves an equal error rate of 2.32% and an accuracy of 98.316%.
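The abstract gives only the outline of the template-generation step; the sketch below illustrates the generic graph-based random-walk cross-diffusion technique on two affinity matrices built from transformed feature sets. It is a minimal illustration under stated assumptions, not the authors' implementation: the RBF affinities, the kNN truncation, the neighbourhood size `k`, and the iteration count are all assumed here.

```python
import numpy as np

def row_normalize(W):
    """Turn an affinity matrix into a row-stochastic transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def knn_kernel(W, k):
    """Keep only each row's k strongest affinities (local random-walk kernel)."""
    S = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]
    rows = np.arange(W.shape[0])[:, None]
    S[rows, idx] = W[rows, idx]
    return row_normalize(S)

def cross_diffusion(W1, W2, k=5, iterations=20):
    """Fuse two affinity matrices by letting each one diffuse along the
    other's random-walk structure; returns a single fused affinity matrix."""
    P1, P2 = row_normalize(W1), row_normalize(W2)
    S1, S2 = knn_kernel(W1, k), knn_kernel(W2, k)
    for _ in range(iterations):
        # simultaneous update: each chain is smoothed through the other
        P1, P2 = S1 @ P2 @ S1.T, S2 @ P1 @ S2.T
    return (P1 + P2) / 2.0

# Toy usage with two random "transformed feature" matrices; the RBF kernel
# over pairwise distances is an assumed way to build the affinity graphs.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(30, 16)), rng.normal(size=(30, 16))

def rbf(X, gamma=0.1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

fused = cross_diffusion(rbf(X1), rbf(X2))
```

The tuple assignment inside the loop evaluates both right-hand sides before assigning, so each matrix is updated from the other's previous iterate, which is the standard simultaneous form of cross-diffusion.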
Facial expression is a form of non-verbal communication that precedes verbal communication in both origin and conception. Most existing methods for Automatic Facial Expression Recognition (AFER) focus on global feature extraction, assuming that all facial regions contribute an equal amount of discriminative information to predicting the expression class. The detection and localization of facial regions that contribute significantly to expression recognition, and the extraction of highly discriminative feature distributions from those regions, are not fully explored. The key contributions of the proposed work are developing a novel feature distribution that combines the discriminative power of shape and texture features, and determining the contribution of facial regions so as to identify the prominent regions that hold abstract and highly discriminative information for expression recognition. The shape and texture features considered are Local Phase Quantization (LPQ), Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HOG). A Multiclass Support Vector Machine (MSVM) with a one-versus-one strategy is used for classification. The proposed work is evaluated on the CK+, KDEF, and JAFFE benchmark facial expression datasets. Its recognition rate is 94.2% on CK+ and 93.7% on KDEF, significantly higher than that of existing handcrafted feature-based methods.
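The abstract names LPQ, LBP, and HOG computed over facial regions, followed by a one-versus-one MSVM; the sketch below shows one plausible region-wise LBP + HOG pipeline using scikit-image and scikit-learn. LPQ is omitted because it has no standard scikit-image implementation, and the 4x4 region grid, descriptor parameters, and assumed minimum face size are illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def region_features(face, grid=(4, 4), lbp_points=8, lbp_radius=1):
    """Split a grayscale face image into a grid of regions and concatenate
    per-region uniform-LBP histograms with per-region HOG descriptors.
    Assumes face crops of at least 32x32 so each patch fits a HOG cell."""
    h, w = face.shape
    rh, rw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = face[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            # uniform LBP with P points yields codes in [0, P + 1]
            lbp = local_binary_pattern(patch, lbp_points, lbp_radius,
                                       method="uniform")
            hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                                   range=(0, lbp_points + 2), density=True)
            feats.append(hist)
            feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(1, 1)))
    return np.concatenate(feats)

# SVC handles multiclass problems with a one-versus-one scheme internally,
# matching the MSVM setup named in the abstract.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
# Hypothetical training call, given face crops and expression labels:
# X = np.stack([region_features(f) for f in train_faces])
# clf.fit(X, y_train)
```

Keeping the histograms per region, rather than pooling globally, is what lets a downstream analysis weigh how much each facial region contributes to the prediction.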