2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851961
Unsupervised Learning of Eye Gaze Representation from the Web

Abstract: Automatic eye gaze estimation has interested researchers for a while now. In this paper, we propose an unsupervised learning based method for estimating the eye gaze region. To train the proposed network "Ize-Net" in a self-supervised manner, we collect a large 'in the wild' dataset containing 154,251 images from the web. For the images in the database, we divide the gaze into three regions using an automatic technique based on pupil-center localization, and then use a feature-based technique to determine the…
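
The abstract does not spell out how pupil-center localization is converted into the three coarse gaze regions. The sketch below is a minimal illustration of one plausible pseudo-labeling rule, assuming hypothetical inputs (a detected pupil center and two eye-corner landmarks) and an arbitrary margin threshold; the function name `gaze_region_label` and the 0.15 margin are assumptions for illustration, not the authors' Ize-Net pipeline.

```python
# Minimal sketch (assumption, not the paper's implementation): map a detected
# pupil center to a coarse horizontal gaze region (left / center / right) by
# normalizing its x-coordinate between the two eye-corner landmarks.
from typing import Tuple


def gaze_region_label(pupil: Tuple[float, float],
                      inner_corner: Tuple[float, float],
                      outer_corner: Tuple[float, float],
                      margin: float = 0.15) -> str:
    """Return 'left', 'center', or 'right' for one eye.

    The pupil x-coordinate is normalized to [0, 1] between the eye corners;
    offsets more than `margin` away from the midpoint are labeled as a side
    region. The 0.15 margin is an illustrative value, not from the paper.
    """
    x0, x1 = inner_corner[0], outer_corner[0]
    if x1 == x0:                        # degenerate landmarks: fall back to center
        return "center"
    t = (pupil[0] - x0) / (x1 - x0)     # 0 at inner corner, 1 at outer corner
    if t < 0.5 - margin:
        return "left"
    if t > 0.5 + margin:
        return "right"
    return "center"


# Example: a pupil shifted toward the outer corner yields a 'right' pseudo-label.
print(gaze_region_label(pupil=(78.0, 40.0),
                        inner_corner=(50.0, 42.0),
                        outer_corner=(90.0, 41.0)))
```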

Cited by 19 publications (21 citation statements)
References 44 publications
“…Eye image … Facial image …” (table excerpt from the citing survey grouping cited gaze-estimation references by input type: eye image vs. facial image)
Section: Feature (mentioning)
confidence: 99%
“…Semi-/Self-/Un-Supervised CNN…” (table excerpt from the citing survey grouping cited gaze-estimation references by model type, including a semi-/self-/un-supervised CNN category)
Section: Model (mentioning)
confidence: 99%
“…These methods face several challenges, which include partial occlusion of the iris by the eyelid, illumination conditions, head pose, specular reflection when the user wears glasses, the inability to use standard shape fitting for iris-boundary detection, and other effects including motion blur and over-saturation of the image [72]. To deal with these challenges, most of the existing gaze estimation methods [23], [25], [48]-[56] have been performed under constrained environments like a fixed head pose, controlled illumination conditions, and camera angle. Such methods require a huge dump of high-resolution labeled images.…” (interleaved table excerpt: platforms HMD, automotive, handheld, and ET/FV, with viewing distances, fields of view, typical postures, and references)
Section: Gaze Estimation: Problem Setting (mentioning)
confidence: 99%
“…However, the performance enhancement comes at the cost of large-scale annotated data, which is expensive to acquire. Recently, deep learning with limited annotation has gained increasing popularity [26]-[28].…”
Section: Introduction (mentioning)
confidence: 99%