ACM Symposium on Applied Perception 2020
DOI: 10.1145/3385955.3407935
RIT-Eyes: Rendering of near-eye images for eye-tracking applications

Cited by 14 publications (10 citation statements)
References 27 publications
“…photo-realistic gaze and eye contact reconstruction [Nair et al. 2020; Schwartz et al. 2020; Whitmire et al. 2016; Wood et al. 2016], eye editing for portrait images [Shu et al. 2017], and retinal imaging [Huang et al. 2014; Swedish et al. 2015]. However, few works pay attention to eyelashes.…”
Section: Related Work (mentioning)
confidence: 99%
“…In general, approaches to eye segmentation [3,24,23,2,38,42] leverage deep convolutional neural networks [27,26,25] (CNNs) and, consequently, require large eye datasets in order to train these neural models effectively. The requisite datasets can be collected by recording synthetic information from simulations [35] or from human subjects directly [9,50]. Although, real-world eye datasets, such as OpenEDS [9] or MPIIGaze [50], provide invaluable samples of data/images to train the CNN models on, constructing such datasets requires a great deal of human annotation effort (high labeling burden) as well as introduces potential human subject image privacy issues.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although, real-world eye datasets, such as OpenEDS [9] or MPIIGaze [50], provide invaluable samples of data/images to train the CNN models on, constructing such datasets requires a great deal of human annotation effort (high labeling burden) as well as introduces potential human subject image privacy issues. In contrast, synthetic eye datasets circumvent these issues, reducing the data collection effort inherent to working with actual human participants as well as manual labeling work needed to generate ground truth segmentation masks [35]. As a result, generating datasets of synthetic data samples offers the potential to train powerful eye-tracking CNN-based systems at greatly reduced overall cost.…”
Section: Introduction (mentioning)
confidence: 99%
“…Ray-tracing operations are then used to simulate the position of features used by an eye-tracker such as the pupil and glints of the user on the image sensor of a camera model. Contemporary simulation environments also include realistic head models which allow the simulation of synthetic images of the entire eye region (32; 19, 22).…”
Section: Introduction (mentioning)
confidence: 99%
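The last quoted passage describes using ray tracing to locate eye-tracking features such as glints (corneal reflections of illuminator LEDs). As an illustrative sketch only, not the RIT-Eyes implementation, the two geometric primitives involved are a ray–sphere intersection against a spherical cornea model and a specular reflection about the surface normal; all geometry values below (camera position, corneal radius, distances) are hypothetical placeholders:

```python
import numpy as np

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray hits the sphere, or None."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def reflect(incident, normal):
    """Mirror-reflect an incident direction about a unit surface normal."""
    return incident - 2.0 * np.dot(incident, normal) * normal

# Hypothetical setup: camera at the origin looking down +z; spherical cornea
# of radius 7.8 mm (a common schematic-eye value) centered 30 mm away.
cam = np.zeros(3)
ray = np.array([0.0, 0.0, 1.0])
cornea_center = np.array([0.0, 0.0, 30.0])
cornea_radius = 7.8

t = ray_sphere_intersect(cam, ray, cornea_center, cornea_radius)
hit = cam + t * ray                          # point on the corneal surface
n = (hit - cornea_center) / cornea_radius    # outward unit normal at the hit point
out = reflect(ray, n)                        # direction a glint ray leaves the surface
```

In a full simulator this intersection/reflection step would be repeated per LED and per camera pixel to find where each glint lands on the sensor; the sketch above only shows the single-ray core of that computation.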