Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications 2019
DOI: 10.1145/3314111.3319844

A deep learning approach for robust head pose independent eye movements recognition from videos

Abstract: Recognizing eye movements is important for gaze behavior understanding like in human communication analysis (human-human or robot interactions) or for diagnosis (medical, reading impairments). In this paper, we address this task using remote RGB-D sensors to analyze people behaving in natural conditions. This is very challenging given that such sensors have a normal sampling rate of 30 Hz and provide low-resolution eye images (typically 36x60 pixels), and natural scenarios introduce many variabilities in illum…
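As a rough, hypothetical illustration of the kind of model the abstract describes (a deep network that labels each frame of a 30 Hz stream of low-resolution 36x60 eye crops with an eye-movement class), here is a minimal PyTorch sketch. The CNN+GRU layout, layer sizes, and names are assumptions made for this example, not the authors' architecture; only the crop size, frame rate, and the fixation/saccade/blink classes are taken from the abstract and the citation statements below.

import torch
import torch.nn as nn

class EyeMovementNet(nn.Module):
    """Hypothetical per-frame eye-movement classifier (illustrative only)."""
    def __init__(self, n_classes=3):  # fixation / saccade / blink
        super().__init__()
        # Per-frame CNN encoder for single-channel 36x60 eye crops.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 18x30
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 9x15
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),                        # -> 64-d
        )
        # Temporal model over the 30 Hz frame sequence.
        self.gru = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, frames):  # frames: (batch, time, 1, 36, 60)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out)   # per-frame class logits: (batch, time, 3)

# Example: a batch of 2 clips of 90 frames (3 s at 30 Hz).
logits = EyeMovementNet()(torch.randn(2, 90, 1, 36, 60))
print(logits.shape)  # torch.Size([2, 90, 3])

A bidirectional recurrent layer lets each frame's label draw on a short window of temporal context, which matters for brief events such as saccades and blinks at a 30 Hz sampling rate.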

Cited by 5 publications (3 citation statements)
References 27 publications

“…The method integrates denoising into segmentation and performs classification on the denoised segments into fixations, saccades, smooth pursuits, and post-saccadic oscillations. It has previously been applied to noisy data to recover gaze position, and velocity estimates in experiments with complex gaze behavior [72,73]. Although the method estimates the signal's noise level and determines gaze feature parameters from human classification examples in a data-driven manner, our empirical testing showed that performance was drastically improved if the estimation was performed using at least 3 s of data.…”
Section: 2 (mentioning)
confidence: 98%
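Not the cited method itself, but a minimal sketch of the general idea raised in the statement above: estimate the signal's noise level in a data-driven way and let it set an event threshold. The function name, the MAD-based noise estimate, and the constant k are assumptions for illustration.

import numpy as np

def segment_saccades(gaze_xy, fs=30.0, k=6.0):
    """Toy noise-adaptive velocity thresholding (illustrative only).

    gaze_xy: (N, 2) gaze positions in degrees, sampled at fs Hz.
    k:       threshold multiplier on the robust noise estimate.
    """
    vel = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs  # deg/s
    # Robust noise level via the median absolute deviation of velocity.
    sigma = 1.4826 * np.median(np.abs(vel - np.median(vel)))
    return vel > k * max(sigma, 1e-6), sigma

# Synthetic 3 s recording (90 samples at 30 Hz) of slow drift.
rng = np.random.default_rng(0)
gaze = np.cumsum(rng.normal(0.0, 0.05, size=(90, 2)), axis=0)
is_saccade, noise = segment_saccades(gaze)
print(int(is_saccade.sum()), round(float(noise), 3))

With only a handful of samples the median-based estimate fluctuates strongly, which is in line with the statement's observation that at least about 3 s of data (roughly 90 samples at 30 Hz) are needed for a stable estimate.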
“…In recent years, deep learning methods have been used to form better representations in order to improve the accuracy and robustness of general object detection [32,60,63] and keypoint detection [6,8,32,54]. Some relevant papers [2,10,40,65] explore using existing deep learning architectures on the tasks of blink or gaze estimation. Our work takes a further step to propose a method for precise keypoint detection and a unified framework designed specifically for joint eye, pupil, and blink detection.…”
Section: Pupil Detection and Blink Estimation (mentioning)
confidence: 99%
“…To the best of our knowledge, there are no deep learning based methods that jointly consider blink estimation and gaze estimation. However, note that Siegfried et al [41] recently proposed a deep learning based method to classify gaze streams into fixation, saccade, and blink classes.…”
Section: Related Work (mentioning)
confidence: 99%