Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300842
Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

Cited by 24 publications (30 citation statements)
References 39 publications
“…These methods exploit eye-head coordination implicitly, as they track the compensatory eye movement during a head gesture without the need for separate head tracking. By extension, head turning has been proposed for scalar input to controls fixated by gaze [36] and for 3D target disambiguation [28]. In Eye-SeeThrough, head movement controls a toolglass that can be moved over gaze-fixated targets [29].…”
Section: Combination of Eye and Head Movement (mentioning)
confidence: 99%
“…These early works demonstrated how target distance affects the VOR gain for angular horizontal movements. Recent work showed that the effect can be used for resolving target ambiguity when gaze is used for object selection in virtual reality [Mardanbegi et al 2019]. This work, in contrast, presents a fundamental investigation of VOR for gaze depth estimation, for which we build on a theory developed in other fields, i.e.…”
Section: Related Work (mentioning)
confidence: 99%
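The distance-gain relation quoted above is often idealized geometrically: with the eye offset by a distance r from the head's rotation axis, perfect fixation of a target at depth d requires an eye-in-head rotation gain of roughly 1 + r/d, so the gain rises for near targets and approaches 1 at optical infinity. The sketch below is a minimal illustration of that model, not the cited papers' implementation; the function names, the least-squares gain estimate, and the 0.1 m eye offset are assumptions.

```python
import numpy as np

def vor_gain(eye_vel_deg_s: np.ndarray, head_vel_deg_s: np.ndarray) -> float:
    """Least-squares slope of compensatory eye-in-head velocity against
    head velocity. The eye counter-rotates during VOR, hence the negation."""
    return float(-np.dot(eye_vel_deg_s, head_vel_deg_s)
                 / np.dot(head_vel_deg_s, head_vel_deg_s))

def depth_from_gain(gain: float, eye_offset_m: float = 0.1) -> float:
    """Invert the idealized relation gain = 1 + r/d, where r is the
    (assumed) 0.1 m distance from the head's rotation axis to the eye."""
    if gain <= 1.0:
        return float("inf")  # at or beyond optical infinity in this model
    return eye_offset_m / (gain - 1.0)

# Example: a synthetic head sweep with the gain expected for a 0.5 m target.
head = np.linspace(-30.0, 30.0, 100)        # head velocity samples, deg/s
eye = -1.2 * head                           # compensatory eye velocity
print(depth_from_gain(vor_gain(eye, head)))  # ~0.5
```

Under this model a measured gain of 1.2 implies a depth of 0.1 / 0.2 = 0.5 m, while gains close to 1 map to distant targets, which is why depth resolution degrades with distance.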
“…The raw pupil position data was less noisy than the gaze signal for some of the recordings. As suggested in [Mardanbegi et al 2019], we also used pupil data instead of gaze data. Being able to use the pupil position makes the proposed method independent of gaze calibration.…”
Section: VOR Gain Using Pupil Centre (mentioning)
confidence: 99%
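Because the gain is a velocity ratio, any signal that is (for small rotations) roughly proportional to eye rotation can stand in for calibrated gaze, which is one reading of why raw pupil-center positions suffice in the quoted passage. The following is a hypothetical sketch of that idea, not the cited papers' method; the names and the linear pupil-to-angle assumption are mine. The unknown pixels-per-degree factor cancels when two gains from the same eye camera are compared, so relative depth ordering needs no calibration.

```python
import numpy as np

def pixel_gain(pupil_vel_px_s: np.ndarray, head_vel_deg_s: np.ndarray) -> float:
    """Least-squares slope of compensatory pupil-center velocity (px/s)
    against head velocity (deg/s). Equals the true VOR gain times an
    unknown pixels-per-degree constant, so no gaze calibration is needed."""
    return float(-np.dot(pupil_vel_px_s, head_vel_deg_s)
                 / np.dot(head_vel_deg_s, head_vel_deg_s))

def nearer_fixation(pix_gain_a: float, pix_gain_b: float) -> str:
    """Order two fixations by depth: the pixels-per-degree constant is the
    same for both, so the larger pixel gain marks the nearer target."""
    return "A" if pix_gain_a > pix_gain_b else "B"
```

If absolute depths are needed rather than an ordering, one reference fixation at a known depth would fix the unknown scale; without one, only relative comparisons between candidates are available in this sketch.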