2009
DOI: 10.6028/nist.ir.7607

Overview of the multiple biometrics grand challenge

Abstract: Recent studies show that face recognition in uncontrolled images remains a challenging problem, although the reasons why are less clear. Changes in illumination are one possible explanation, although algorithms developed since the advent of the PIE and Yale B databases supposedly compensate for illumination variation. Edge density has also been shown to be a strong predictor of algorithm failure on the FRVT 2006 uncontrolled images: recognition is harder on images with higher edge density. This paper presents…
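As a rough illustration of the edge-density idea mentioned in the abstract, the sketch below computes the fraction of pixels whose gradient magnitude exceeds a cutoff. The exact metric used in the FRVT 2006 analysis is not given here, so the gradient-based formulation and the threshold value are assumptions for illustration only.

```python
# Minimal edge-density sketch (illustrative; not the FRVT 2006 metric).
import numpy as np

def edge_density(gray_image: np.ndarray, threshold: float = 0.1) -> float:
    """Fraction of pixels whose gradient magnitude exceeds `threshold`.

    gray_image: 2-D array of intensities scaled to [0, 1].
    threshold:  gradient-magnitude cutoff (assumed value, not from the paper).
    """
    gy, gx = np.gradient(gray_image.astype(float))
    magnitude = np.hypot(gx, gy)
    return float((magnitude > threshold).mean())
```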

Cited by 45 publications (25 citation statements)
References 5 publications
“…We use three publicly available iris databases in all experiments: the Chinese Academy of Sciences CASIA 2.0 (device 1) [31], the US National Institute of Standards and Technology (NIST) "Iris Challenge Evaluation", experiment 1 (right eye), (ICE-1) [32], and the NIST "Multiple Biometric Grand Challenge" (MBGC), Portal Challenge, experiment 3, left eye (MBGC-3l) [33]. Their properties are summarized in Table 3.…”
Section: Results
confidence: 99%
“…To illustrate the effectiveness of our method, we present experimental results on three publicly available datasets for video-based face recognition: the Multiple Biometric Grand Challenge (MBGC) [18], [19], the Face and Ocular Challenge Series (FOCS) [20], [21], and the Honda/UCSD datasets [7]. Figure 2(a) shows example frames from four different activity sequences, where each subject reads from a paper, and the sequences consist of non-frontal views of the subject.…”
Section: Results
confidence: 99%
“…In the MBGC [18] protocol, verifications are specified by two sets: target and query. The protocol requires the algorithm to match each target sequence with all query sequences.…”
Section: MBGC Walking Videos
confidence: 99%
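The statement above describes the MBGC verification protocol as matching every target sequence against every query sequence. A minimal sketch of that all-pairs scoring is shown below; `match_score` is a hypothetical placeholder for whatever matcher an algorithm supplies, not part of any MBGC distribution.

```python
# All-pairs target-vs-query scoring, as described in the MBGC protocol.
import numpy as np

def score_matrix(targets, queries, match_score):
    """Return S where S[i, j] = match_score(targets[i], queries[j])."""
    scores = np.empty((len(targets), len(queries)))
    for i, t in enumerate(targets):
        for j, q in enumerate(queries):
            scores[i, j] = match_score(t, q)  # user-supplied matcher
    return scores
```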
“…Similarly for the right eye, the Rank-1 [26] recognition rates for the iris, periocular regions, and their combination were 10.1, 88.7, and 92.4%, respectively. Even though higher iris recognition performance was reported for the MBGC portal challenge [19], the target images used in these experiments were still images of relatively higher quality (and of a significantly larger size). In this work, both the target and the query images are from the NIR face videos and, as described earlier, are highly non-ideal in nature.…”
Section: Uniqueness of Periocular Features
confidence: 98%
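The rank-1 rates quoted above are the fraction of probes whose best-scoring gallery entry has the correct identity. A minimal sketch of that computation from a similarity matrix follows; it is illustrative only, not the evaluation code used in the cited study.

```python
# Rank-1 recognition rate from a gallery-by-probe similarity matrix.
import numpy as np

def rank1_rate(scores: np.ndarray, gallery_ids, probe_ids) -> float:
    """scores[i, j] is the similarity of gallery item i to probe j."""
    gallery_ids = np.asarray(gallery_ids)
    probe_ids = np.asarray(probe_ids)
    best = np.argmax(scores, axis=0)  # index of best gallery match per probe
    return float((gallery_ids[best] == probe_ids).mean())
```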
“…Alternatively, robust algorithms can be developed to handle specific challenges such as pose or illumination invariant face recognition, or iris recognition in off-axis images. Another option is to make use of additional information to enhance the recognition performance such as the use of multi-biometrics, i.e., fusion of different biometric modalities (for example, fusing face and iris recognition scores [19]). A related idea is to explore newer traits that can aid the existing biometric modalities.…”
Section: Introduction
confidence: 99%
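The statement above mentions fusing face and iris recognition scores as one way to improve performance. A minimal sketch of score-level fusion, using min-max normalization followed by a weighted sum, is given below; the weights and normalization choice are assumptions for illustration, not values from the cited work.

```python
# Score-level fusion sketch: normalize each matcher's scores, then combine.
import numpy as np

def minmax_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw matcher scores to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(face_scores, iris_scores, w_face=0.5, w_iris=0.5):
    """Weighted-sum fusion of two matchers' scores on the same comparisons."""
    f = minmax_normalize(np.asarray(face_scores, dtype=float))
    i = minmax_normalize(np.asarray(iris_scores, dtype=float))
    return w_face * f + w_iris * i
```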