Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications 2019
DOI: 10.1145/3314111.3319914
Improving real-time CNN-based pupil detection through domain-specific data augmentation

Cited by 32 publications (24 citation statements). References 21 publications.
“…The ideal solution would be to measure a full set of pupillometric features from the RGB cameras embedded on the robotic platform. Recent findings suggest that this approach could be feasible [43][44][45][46]74]; hence, we look forward to removing this limitation, making the system completely non-intrusive. Other than due to cognitive load changes, pupil dilation tends to be affected by other factors like excitement, stress, and environmental light conditions.…”
Section: Discussion
confidence: 99%
“…Both mobile head-mounted [39,40] and remote eye-tracker [41,42] devices have been used as minimally invasive methods to measure pupillometric features, more appropriate for real-world scenarios. Recent research showed the possibility of measuring TEPRs from RGB cameras, suitable for robotic platforms, making pupillometry a promising candidate to detect lies in real-life human-robot interactions [43][44][45][46].…”
Section: Introduction
confidence: 99%
“…Since then, several approaches like (Fuhl et al., 2015, 2016; Santini et al., 2018a, 2018b) were proposed for robust real-time pupil detection in challenging natural environments like driving and walking. The current state-of-the-art approach (Eivazi et al., 2019) reported a pupil detection rate of ~85% on the PupilNet (Fuhl et al., 2017) and LPW (Tonsen et al., 2016) datasets, and a detection rate of ~74% on the Swirski (Świrski et al., 2012) dataset. Yet, the performance of these approaches in terms of gaze estimation in similar challenging environments with such pupil detection accuracies is still unknown.…”
Section: Related Work
confidence: 99%
“…Eivazi et al. [13] augmented a dataset of real eye images by recording scene reflections off the anterior side of black-coated glasses and superimposing them on the real eye imagery. To achieve similar reflections, we incorporate 3 mm thick eyeglasses with black frames.…”
Section: Rendered Datasets
confidence: 99%
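The reflection-superimposition idea described in this excerpt can be sketched as a simple alpha blend of a reflection pattern onto an eye image. This is a generic compositing sketch, not the authors' exact method; the function name and blend weight are illustrative assumptions:

```python
import numpy as np

def superimpose_reflection(eye, reflection, alpha=0.4):
    """Blend a grayscale reflection pattern onto an eye image.

    `eye` and `reflection` are uint8 arrays of the same shape;
    `alpha` controls the reflection strength (0 = no reflection).
    """
    eye_f = eye.astype(np.float32)
    refl_f = reflection.astype(np.float32)
    blended = (1.0 - alpha) * eye_f + alpha * refl_f
    return np.clip(blended, 0, 255).astype(np.uint8)

# Example: a dark synthetic eye image with a bright reflection patch
eye = np.full((64, 64), 40, dtype=np.uint8)
reflection = np.zeros((64, 64), dtype=np.uint8)
reflection[20:30, 20:30] = 255
out = superimpose_reflection(eye, reflection, alpha=0.4)
```

In practice the reflection layer would come from recorded scene imagery rather than a synthetic patch, and the blend weight could itself be randomized per training sample.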
“…Both models are trained using the loss function strategy proposed in RITnet [9]. This strategy involves using a weighted combination of four loss functions. Image augmentation: image augmentation aids in broadening the statistical distribution of information content and combats overfitting [13]. Previous efforts [9,27] have shown that data augmentation on eye images improves the performance of convolutional networks under naturalistic conditions such as varying contrast, eye makeup, eyeglasses, multiple reflections and image distortions.…”
Section: Model Architecture
confidence: 99%
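The photometric side of the augmentation described above (e.g. varying contrast) can be sketched as random contrast/brightness jitter. The jitter ranges here are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def photometric_augment(img, rng):
    """Randomly jitter contrast and brightness of a grayscale eye image.

    Contrast is scaled in [0.7, 1.3] and brightness shifted in
    [-20, 20]; results are clipped back to valid uint8 range.
    """
    contrast = rng.uniform(0.7, 1.3)
    brightness = rng.uniform(-20.0, 20.0)
    out = img.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: augment a flat mid-gray eye image
img = np.full((32, 32), 128, dtype=np.uint8)
aug = photometric_augment(img, rng)
```

Applying a fresh random jitter to each sample during training exposes the network to the contrast variation named in the quote; geometric distortions and reflection overlays would be composed on top of this in a full pipeline.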