This paper presents research on the use of multi-source information fusion in the field of eye movement biometrics. In the current state of the art, different techniques have been developed to extract the physical and behavioral biometric characteristics of eye movements. In this work, we explore the effects of multi-source fusion of the heterogeneous information extracted by different biometric algorithms in the presence of diverse visual stimuli. We propose a two-stage fusion approach that employs stimulus-specific and algorithm-specific weights for fusing the information from different matchers based on their identification efficacy. The experimental evaluation, performed on a large database of 320 subjects, reveals a considerable improvement in biometric recognition accuracy, with a minimal equal error rate (EER) of 5.8% and a best-case Rank-1 identification rate (Rank-1 IR) of 88.6%. It should also be emphasized that although the concept of multi-stimulus fusion is currently evaluated specifically for eye movement biometrics, it can be adopted by other biometric modalities as well, in cases where an exogenous stimulus affects the extraction of the biometric features.

Keywords: eye movement biometrics, multi-stimulus fusion, multi-algorithmic fusion

The human body provides an invaluable source of distinctive information suitable for the task of biometric recognition [1]. The most well-studied and widely adopted biometric modalities are the fingerprint, the iris, and the face. Other explored biometric traits include the palm, hand geometry, the ears, the nose, and the lips. The analysis of blood-vessel morphology serves as the main source of biometric features in methods such as vein matching and retinal scanning. There are also biometric traits that enfold behavioral characteristics, i.e.
traits that are partially connected with brain activity. Examples in this category include speech analysis and voice recognition, the handwritten signature, keystroke dynamics, gait analysis, and eye movement-driven biometrics. Considering the abundance of existing biometric modalities and the heterogeneity of the associated features, it may come as no surprise that there is a strong trend in biometric research towards the investigation and adoption of information fusion techniques.

Information Fusion in Biometrics

Information fusion can provide numerous benefits in the domain of biometric recognition. The most obvious among them is the expected performance gain in terms of biometric accuracy due to the combination of evidence gathered from multiple cues [2]. Fusion techniques can also be employed for the selection and promotion of the most informative features within a large feature set [3]. In addition, the combination of different sources of biometric information can open the path for the creation of b...
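The two-stage weighted fusion described in the abstract can be illustrated with a minimal score-level fusion sketch. The matcher names, weight values, and averaging scheme below are illustrative assumptions, not the paper's actual algorithms or parameters; in practice, the weights would be derived from each matcher's identification efficacy (e.g., its Rank-1 IR on a validation set).

```python
# Sketch of two-stage weighted score-level fusion (hypothetical weights).
# Stage 1: within each stimulus, combine the matchers' scores using
#          algorithm-specific weights.
# Stage 2: combine the per-stimulus scores using stimulus-specific weights.

def fuse(scores_by_stimulus, algo_weights, stim_weights):
    """scores_by_stimulus: {stimulus: {algorithm: normalized match score}}."""
    fused_per_stimulus = {}
    for stim, algo_scores in scores_by_stimulus.items():
        w = algo_weights[stim]
        # Stage 1: algorithm-specific weighted average within one stimulus.
        fused_per_stimulus[stim] = (
            sum(w[a] * s for a, s in algo_scores.items()) / sum(w.values())
        )
    # Stage 2: stimulus-specific weighted average across stimuli.
    total_w = sum(stim_weights[s] for s in fused_per_stimulus)
    return (
        sum(stim_weights[s] * v for s, v in fused_per_stimulus.items())
        / total_w
    )

# Hypothetical example: two stimuli, two matchers each.
scores = {
    "text": {"matcher_A": 0.8, "matcher_B": 0.6},
    "video": {"matcher_A": 0.5, "matcher_B": 0.7},
}
algo_w = {
    "text": {"matcher_A": 2.0, "matcher_B": 1.0},
    "video": {"matcher_A": 1.0, "matcher_B": 1.0},
}
stim_w = {"text": 1.5, "video": 1.0}
print(round(fuse(scores, algo_w, stim_w), 4))  # -> 0.68
```

The fused score would then feed a standard verification threshold (for EER) or a ranked gallery comparison (for Rank-1 IR).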
This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications applying machine learning to eye movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.