2019
DOI: 10.16910/jemr.12.6.4

Motion tracking of iris features to detect small eye movements

Abstract: The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). As current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a…
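
The technique the abstract points at, tracking texture features on the iris itself rather than relying only on a pupil or corneal-reflection centroid, can be illustrated with a short optical-flow sketch. This is a minimal sketch of the general idea, not the authors' implementation; the synthetic frames, the central-disc "iris" mask, and the pixels-per-degree constant are placeholders.

```python
# Minimal sketch of iris-feature tracking: follow many texture points on the
# iris across frames and use their common displacement as the eye-motion
# signal, rather than a single pupil/corneal-reflection centroid.
# Not the authors' implementation; frames, mask, and calibration are placeholders.
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "iris texture" frame and a copy shifted by a small, known amount.
prev = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
prev = cv2.GaussianBlur(prev, (5, 5), 0)
shift = np.float32([[1, 0, 1.5], [0, 1, -0.5]])          # 1.5 px right, 0.5 px up
curr = cv2.warpAffine(prev, shift, (320, 240))

# Restrict feature selection to an "iris" region (here: a central disc).
iris_mask = np.zeros_like(prev)
cv2.circle(iris_mask, (160, 120), 60, 255, -1)

# Pick trackable texture points inside the iris and follow them with
# pyramidal Lucas-Kanade optical flow.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01,
                              minDistance=5, mask=iris_mask)
new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                              winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
disp = (new_pts[ok] - pts[ok]).reshape(-1, 2)

# The median displacement over many iris features is a low-noise estimate of
# the frame-to-frame eye motion; dividing by a calibration factor converts it
# to degrees, where small rapid shifts would indicate microsaccade candidates.
dx, dy = np.median(disp, axis=0)
PIX_PER_DEG = 20.0                                        # placeholder calibration
print(f"estimated shift: {dx:.2f}, {dy:.2f} px "
      f"({dx / PIX_PER_DEG:.3f}, {dy / PIX_PER_DEG:.3f} deg)")
```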

Cited by 14 publications (6 citation statements). References 55 publications (78 reference statements).

“…Image data is encapsulated within a processing unit, reducing the chance that a malicious user can gain access. However, this also restricts applications that may utilize the iris for improved gaze estimation [14], realistic rendering of the user's eye [26], and iris authentication in cases where it is desired, such as logging into the Microsoft Hololens 2.…”
Section: Threat Model
confidence: 99%
“…Gaze estimation algorithms which solely rely on hand-crafted features are particularly susceptible to stray reflections (unanticipated patterns on eye imagery) and occlusion of descriptive gaze regions (such as the eyelid covering the pupil or iris). Recent appearance-based methods based on Convolutional Neural Networks (CNNs) are better able to extract reasonably reliable gaze features despite the presence of reflections [3] or occlusions [31]. Additionally, for head-mounted eye-tracking systems, the degradation of gaze estimate accuracy over time due to slippage [25] can be minimized by estimating the 3D eyeball center of rotation [40] (loosely referred to as an "eyeball fit").…”
Section: Introduction
confidence: 99%
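
The "eyeball fit" referenced in this statement amounts to finding the point that per-frame gaze rays approximately pass through. Below is a minimal least-squares sketch of that idea, assuming each frame already yields a 3D pupil position and gaze direction; the synthetic rays stand in for real per-frame estimates, and the solve shown is a generic ray-intersection routine, not the procedure of any cited paper.

```python
# Least-squares estimate of an eyeball rotation center from a set of gaze rays.
# A sketch of the general idea only; the rays below are synthetic stand-ins for
# per-frame (pupil position, gaze direction) estimates from a real pipeline.
import numpy as np

def closest_point_to_rays(origins, directions):
    """Point minimizing the summed squared distance to the lines (o_i, d_i)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to d
        A += P                            # normal equations: (sum P_i) c = sum P_i o_i
        b += P @ o
    return np.linalg.solve(A, b)

# Synthetic example: rays whose lines pass near a common center at (0, 0, 10),
# with origins placed roughly one eyeball radius (~12 mm) away from it.
rng = np.random.default_rng(0)
true_center = np.array([0.0, 0.0, 10.0])
dirs = rng.normal(size=(20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = true_center + 12.0 * dirs + rng.normal(scale=0.05, size=(20, 3))

print(closest_point_to_rays(origins, dirs))   # should be close to (0, 0, 10)
```
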
“…These elliptical fits are derived from identified pupil and iris segments. Efforts by Chaudhary et al. [3] and Wu et al. [47] demonstrate that CNNs can precisely segment eye images into their constituent parts, i.e., the pupil, iris, sclera and background skin regions.…”
Section: Introduction
confidence: 99%
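
The four-way segmentation described here (pupil, iris, sclera, background skin) can be sketched with a toy encoder-decoder CNN that produces a per-pixel class label. The architecture below is a deliberately small placeholder, not the networks of Chaudhary et al. or Wu et al.

```python
# Toy per-pixel eye segmentation into 4 classes (skin/background, sclera, iris, pupil).
# A deliberately small stand-in architecture, not the models from the cited papers.
import torch
import torch.nn as nn

class TinyEyeSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):                         # x: (N, 1, H, W) grayscale eye image
        return self.decoder(self.encoder(x))      # (N, 4, H, W) per-pixel class scores

model = TinyEyeSegNet()
image = torch.rand(1, 1, 240, 320)                # placeholder near-eye frame
with torch.no_grad():
    labels = model(image).argmax(dim=1)           # (1, 240, 320) label map
print(labels.shape, labels.unique())
```

The resulting label map is exactly the kind of region mask that the ellipse-fitting and iris-feature-tracking steps discussed in these statements consume downstream.
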
“…In an effort to engage the machine learning and eye-tracking communities in the field of eye tracking for head-mounted displays (HMD), Facebook Reality Labs issued the Open Eye Dataset (OpenEDS) Semantic Segmentation challenge, which addresses part of the gaze estimation pipeline: identifying different regions of interest (e.g., pupil, iris, sclera, skin) in close-up images of the eye. Such segmentation supports the extraction of region-specific features (e.g., iridial feature tracking [2]) and mathematical models which summarize the region structures (e.g., iris ellipse [17, 1, 13], or pupil ellipse [7]) used to derive a measure of gaze orientation.…”
Section: Introduction
confidence: 99%
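
The region summaries this statement mentions (pupil or iris ellipse) are commonly obtained by fitting an ellipse to the boundary of a segmented region. A minimal OpenCV sketch follows, using a synthetic circular mask in place of a real segmentation output.

```python
# Fit an ellipse to a segmented pupil region, a common way of summarizing a
# region structure for gaze estimation. The circular mask below is a synthetic
# stand-in for a real segmentation output.
import cv2
import numpy as np

# Synthetic binary pupil mask (a filled circle) in place of a real segmentation.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(mask, (160, 120), 25, 255, -1)

# Take the largest connected contour of the mask and fit an ellipse to it.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (major, minor), angle = cv2.fitEllipse(largest)

print(f"pupil center: ({cx:.1f}, {cy:.1f}), axes: ({major:.1f}, {minor:.1f}), "
      f"angle: {angle:.1f} deg")
```

Note that cv2.fitEllipse needs at least five boundary points, which is why the largest contour is used and tiny spurious blobs in the mask are ignored.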