2012
DOI: 10.7763/ijcee.2012.v4.470
Feature Level Fusion of Palm and Face for Secure Recognition

Abstract: Biometric user authentication techniques for security and access control have attracted enormous interest from science, industry, and society over the last two decades. However, even the best single-biometric system suffers from spoof attacks, intra-class variability, noise, susceptibility, and similar limitations. In the realm of biometrics, consolidating the evidence presented by multiple biometric sources is an effective way of enhancing the recognition accuracy of an authentication system. This paper proposes an authenticati…
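The truncated abstract does not specify the paper's feature extractors or normalization scheme, so the following is only a minimal sketch of the general feature-level fusion pattern it refers to: normalize each modality's feature vector, then concatenate the palm and face features into a single joint template. The vector lengths, min-max normalization, and variable names are assumptions for illustration.

import numpy as np

def min_max_normalize(v):
    # Scale a feature vector to [0, 1]; a constant vector maps to zeros.
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def fuse_features(palm_features, face_features):
    # Feature-level fusion: normalize each modality, then concatenate
    # into a single joint feature vector (template).
    return np.concatenate([min_max_normalize(palm_features),
                           min_max_normalize(face_features)])

# Hypothetical feature vectors; dimensions are illustrative only.
palm = np.random.rand(64)    # e.g. palmprint texture features
face = np.random.rand(128)   # e.g. face appearance features
fused = fuse_features(palm, face)   # 192-dimensional fused template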

Cited by 12 publications (4 citation statements) · References 9 publications
“…Krishneswari and Arumugam prescribed a multimodal method by combining palmprint and fingerprint traits [9]. Bokade and Sapkal reviewed and discussed the feature-level fusion of palmprint and face traits [10]. Bhagat et al. developed a multimodal biometric method by joining face and palm vein traits [11].…”
Section: Related Work
confidence: 99%
“…The average accuracy of fusion classification lies between 97% and 99%, whereas FAR and FRR lie between 0.01 and 0.06. The results section compares the proposed results with other biometric fusion works (Rattani et al., 2007; Bokade & Sapkal, 2012; Nadheen & Poornima, 2013; Dhameliya & Chaudhari, 2013; Veluchamy & Karlmarx, 2016). It is not necessary that each researcher has used the same biometric sample or the same dataset.…”
Section: Accuracy
confidence: 99%
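For readers unfamiliar with the error rates quoted above, here is a minimal sketch of how FAR (false accept rate) and FRR (false reject rate) are computed from genuine and impostor match scores at a fixed decision threshold. The scores and threshold below are made up for illustration and are not taken from any cited work.

import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of impostor scores accepted (score >= threshold).
    # FRR: fraction of genuine scores rejected (score < threshold).
    far = float(np.mean(np.asarray(impostor_scores) >= threshold))
    frr = float(np.mean(np.asarray(genuine_scores) < threshold))
    return far, frr

# Hypothetical similarity scores (higher = more similar).
genuine = np.array([0.91, 0.85, 0.78, 0.95, 0.52])
impostor = np.array([0.12, 0.35, 0.48, 0.22, 0.55])
print(far_frr(genuine, impostor, threshold=0.6))  # (0.0, 0.2)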
“…Malode and Sahare (2017) used MFCC for feature extraction and a Hidden Markov Model (HMM) for classification in a speech recognition system. It has been empirically shown in many publications (Chetty & Wagner, 2005; Chen & Chu, 2006; Rattani et al., 2007; Zhang et al., 2007; Rattani & Tistarelli, 2009; Almayyan et al., 2011; Liau & Isa, 2011; Bokade & Sapkal, 2012; Park & Kim, 2013; Nadheen & Poornima, 2013; Dhameliya & Chaudhari, 2013; Eskandari et al., 2014; Saleh & Alzoubiady, 2014; Veluchamy & Karlmarx, 2016; Haghighat, 2016; Sarhan et al., 2017; Leghari et al., 2018; Carol & Fred, 2018; Supreetha Gowda et al., 2018) that multimodal biometric systems improve recognition accuracy over unimodal systems by integrating complementary information. Features carry rich information about a biometric trait, so fusion at the feature level is believed to yield better performance (Veluchamy & Karlmarx, 2016).…”
Section: Introduction
confidence: 99%
“…The information from these sources can be fused at different stages of the biometric recognition system. Fusion can be done before matching, at the sensor level [5], [6] or the feature level [7]-[9], or after matching, at the score level [10]-[12], rank level [13]-[15], or decision level [16]-[19]. Nevertheless, the use of multiple sensors raises the cost of the biometric recognition system [4].…”
Section: Introduction
confidence: 99%
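To make the distinction between fusion stages concrete, the following is a minimal sketch of score-level fusion (fusion after matching) using a weighted sum rule. The weights and operating threshold are illustrative assumptions, not values from the cited works, and the per-modality scores are assumed to be already normalized to a common range.

import numpy as np

def score_level_fusion(scores, weights=None):
    # Score-level fusion: weighted sum of per-modality match scores that
    # were already normalized to a common range (e.g. [0, 1]).
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))  # simple sum rule
    return float(np.dot(scores, np.asarray(weights, dtype=float)))

# Hypothetical normalized scores from a palm matcher and a face matcher.
fused = score_level_fusion([0.82, 0.64], weights=[0.6, 0.4])   # 0.748
accept = fused >= 0.7   # decision against an assumed operating threshold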