Multimodal biometric authentication can overcome the shortcomings of unimodal biometric authentication. In this paper, we design and develop an efficient Android-based multimodal biometric authentication system based on face and voice. Because the hardware of smart terminals, including the random access memory (RAM), central processing unit (CPU), and graphics processing unit (GPU), cannot efficiently store and quickly process large amounts of data, a face detection method is introduced to discard the redundant image background and reduce unnecessary information. Furthermore, an improved local binary pattern (LBP) coding method is presented to increase the robustness of the extracted face features. We also improve the conventional endpoint detection technique, i.e., voice activity detection (VAD), which increases the detection accuracy of silent and transition segments and improves the effectiveness of voice matching. To further raise authentication accuracy and efficiency, we present an adaptive fusion strategy that integrates the complementary merits of the face and voice biometrics. Cross-validation experiments on public databases demonstrate encouraging authentication performance compared with several state-of-the-art methods. Extensive tests on an Android-based smart terminal show that the developed multimodal biometric authentication system achieves excellent authentication results and can efficiently satisfy practical requirements.
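Since the improved coding builds on the conventional LBP operator, a brief sketch of that baseline may help fix ideas. This is a minimal Python illustration of the standard 8-neighbor operator only; the paper's improved coding is not reproduced here, and the function names, crop size, and histogram usage are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of conventional 3x3 LBP coding (the baseline the paper
# improves). All names and sizes here are illustrative assumptions.
import numpy as np

def lbp_code(gray):
    """Compute the basic 8-neighbor LBP code for each interior pixel."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbors, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        # Each neighbor >= center contributes one bit to the 8-bit code.
        codes |= (neighbor >= center).astype(np.int32) << bit
    return codes

# Usage: a face descriptor is typically a histogram of LBP codes computed
# over the detected, background-free face region.
face = np.random.randint(0, 256, (64, 64))  # stand-in for a face crop
hist, _ = np.histogram(lbp_code(face), bins=256, range=(0, 256))
```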
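Similarly, the conventional endpoint detection that the paper improves is commonly a short-time-energy VAD. The sketch below shows only that generic baseline under stated assumptions; the frame length, hop, and threshold ratio are illustrative values, not parameters from the paper.

```python
# Minimal sketch of energy-based endpoint detection (conventional VAD).
# Parameter values are illustrative assumptions, not from the paper.
import numpy as np

def simple_vad(signal, frame_len=400, hop=160, energy_ratio=0.1):
    """Mark frames as speech when short-time energy exceeds a threshold."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energy = np.array([np.sum(f.astype(np.float64) ** 2) for f in frames])
    # Threshold relative to the loudest frame; silent and transition
    # frames fall below it and are discarded before matching.
    threshold = energy_ratio * energy.max()
    return energy > threshold
```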
Over the past decades, multispectral palmprint recognition technologies have attracted increasing attention because of their rich spatial and spectral characteristics compared with the single-spectrum case. Motivated by this, an innovative robust L2 sparse representation with tensor-based extreme learning machine (RL2SR-TELM) algorithm is proposed, using an adaptive image-level fusion strategy to accomplish multispectral palmprint recognition. First, we construct a robust L2 sparse representation (RL2SR) optimization model to compute the linear representation coefficients. To suppress the effect of noise contamination, we introduce a logistic function into the RL2SR model to evaluate the representation residual. Second, we propose a novel weighted sparse and collaborative concentration index (WSCCI) to compute the fusion weight adaptively. Finally, we put forward a TELM approach to carry out the classification task; it handles high-dimensional data directly and preserves the spatial information of the images. Extensive experiments are carried out on the benchmark multispectral palmprint database provided by PolyU. The results show that our RL2SR-TELM algorithm outperforms a number of state-of-the-art multispectral palmprint recognition algorithms both when the images are noise-free and when they are contaminated by different kinds of noise.
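The abstract does not state the RL2SR objective explicitly. One plausible form, assuming a ridge-regularized linear representation whose per-pixel residuals are down-weighted by a logistic function (the dictionary $D$, coefficients $\alpha$, and parameters $\mu$, $\delta$ are assumed notation, not taken from the paper), is:

```latex
% Hedged sketch of a robust L2 sparse representation objective; the exact
% formulation in the paper may differ.
\min_{\alpha} \; \sum_{i} \phi(r_i)\, r_i^{2} + \lambda \lVert \alpha \rVert_2^{2},
\qquad r_i = y_i - (D\alpha)_i,
\qquad \phi(r) = \frac{1}{1 + \exp\!\left(\mu\,(r^{2} - \delta)\right)}
```

Under such a weighting, pixels with large residuals (likely corrupted by noise) contribute little to the fit, which matches the stated goal of suppressing noise contamination.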
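The tensor-based ELM classifier is likewise not detailed in the abstract. As a rough reference, a standard extreme learning machine draws random hidden weights and solves the output layer in closed form; the sketch below shows only that vector-input baseline, not the tensor extension that preserves image structure, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of a standard (vector-input) extreme learning machine.
# The paper's TELM operates on image tensors directly; that extension is
# not reproduced here. Parameter values are illustrative assumptions.
import numpy as np

def elm_train(X, T, n_hidden=500, reg=1e-3, rng=None):
    """X: (n_samples, n_features); T: (n_samples, n_classes) one-hot targets."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # Output weights by ridge-regularized least squares (closed form).
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```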