Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications 2016
DOI: 10.1145/2857491.2857532

EyeSee3D 2.0

Abstract: Figure 1: EyeSee3D analyses eye gaze on dynamic areas of interest in 3D environments. Objects and body parts can be tracked using a variety of tracking systems. Data is fused in a common 3D situation model. The example shows tracked head, hands, and gaze, as well as target stimuli of a LEGO toy kit. The user's gaze (blue) currently fixates a part of the roof (highlighted in green on the left).
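
To make the "common 3D situation model" mentioned above concrete, here is a minimal Python sketch of how poses from different trackers could be fused into one world coordinate frame so that gaze analysis can query them together. The class and field names are hypothetical illustrations, not the EyeSee3D data model.

# Hypothetical sketch of a shared 3D situation model: all tracked entities
# are expressed in one world frame. Not the EyeSee3D data structures.
from __future__ import annotations
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Pose:
    position: np.ndarray   # 3D position in the shared world frame (meters)
    rotation: np.ndarray   # 3x3 rotation matrix, world-from-local

@dataclass
class SituationModel:
    timestamp: float                                     # capture time (seconds)
    head: Pose | None = None                             # from head tracking
    hands: dict[str, Pose] = field(default_factory=dict)    # e.g. "left", "right"
    stimuli: dict[str, Pose] = field(default_factory=dict)  # tracked target objects
    gaze_origin: np.ndarray | None = None                # eye position, world frame
    gaze_direction: np.ndarray | None = None             # unit gaze vector, world frame

Because every pose lives in the same frame, mapping gaze to a stimulus reduces to intersecting the gaze ray with the stimulus geometry at the matching timestamp.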

Cited by 29 publications (2 citation statements) · References 18 publications
“…Currently, the definition of the AOIs and the relative gaze mapping relies on computer vision algorithms that still have room for improvement in terms of robustness and ease of use. Some successful attempts have already been proposed in this direction by integrating gaze data with object recognition [Benjamins et al. 2018; Kurzhals et al. 2017; Pfeiffer et al. 2016] and machine learning algorithms [Wolf et al. 2018]. The precision/accuracy of the collected gaze data represents another critical point because it varies depending on the calibration procedure and the target distance.…”
Section: Motivation
confidence: 99%
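
As a concrete illustration of the gaze-to-AOI mapping this statement refers to, here is a minimal Python sketch (not code from EyeSee3D or the citing works): each AOI is approximated by an axis-aligned bounding box in a shared world frame, and a gaze ray is labeled with the nearest box it hits. All AOI names, box extents, and gaze values are hypothetical.

# Minimal gaze-to-AOI mapping sketch: AOIs as axis-aligned bounding boxes,
# gaze as a ray in the same world frame. Illustration values only.
import numpy as np

def ray_aabb_distance(origin, direction, box_min, box_max):
    """Distance along the ray to an AABB, or None if it misses (slab method)."""
    inv = 1.0 / direction  # assumes no exactly-zero components; clamp in practice
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_near > t_far or t_far < 0:
        return None  # ray misses the box, or the box is behind the eye
    return max(t_near, 0.0)

def map_gaze_to_aoi(origin, direction, aois):
    """Label the gaze ray with the nearest intersected AOI, if any."""
    hits = []
    for name, (box_min, box_max) in aois.items():
        dist = ray_aabb_distance(origin, direction, box_min, box_max)
        if dist is not None:
            hits.append((dist, name))
    return min(hits)[1] if hits else None

# Hypothetical example: two LEGO parts tracked in world coordinates (meters).
aois = {
    "roof_part": (np.array([0.10, 0.00, 0.50]), np.array([0.20, 0.05, 0.60])),
    "base_plate": (np.array([-0.10, -0.02, 0.45]), np.array([0.30, 0.00, 0.75])),
}
gaze_origin = np.array([0.0, 0.1, 0.0])         # eye position from head tracking
gaze_dir = np.array([0.15, -0.08, 0.55])
gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)  # unit gaze direction
print(map_gaze_to_aoi(gaze_origin, gaze_dir, aois))  # -> "roof_part"

EyeSee3D-style systems intersect the gaze ray with the tracked object geometry itself; the box test here is just the simplest stand-in for that intersection step.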
“…Using these gaze vectors, it is possible to reconstruct the gaze point on real three-dimensional stimuli by intersecting one or both rays with the fixated object in space, assuming its geometry is known (Hammer et al., 2013; Maurus et al., 2014; Wang et al., 2017b). Alternatively, we can attempt to find the point where the two vectors intersect with each other in space (Hennessey and Lawrence, 2009; Maggia et al., 2013; Pfeiffer and Renner, 2014; Gutierrez Mlot et al., 2016; Pfeiffer et al., 2016), but in 3D space two gaze vectors typically do not intersect.…”
Section: Introduction
confidence: 99%
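
The "closest point" alternative mentioned above can be made concrete with a short sketch: since the left and right gaze rays are usually skew, a common surrogate for their intersection is the midpoint of the shortest segment connecting them. This is a generic geometric construction, not code from the cited works; the eye positions and directions below are hypothetical.

# Vergence-point sketch: midpoint of the shortest segment between two
# (typically skew) gaze rays. Illustration values only.
import numpy as np

def vergence_point(p_left, d_left, p_right, d_right, eps=1e-9):
    """Estimate the 3D gaze point from two gaze rays that need not intersect."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b        # approaches 0 when the rays are near-parallel
    if abs(denom) < eps:
        return None              # no well-defined vergence point
    t = (b * e - c * d) / denom  # parameter along the left ray
    s = (a * e - b * d) / denom  # parameter along the right ray
    return 0.5 * ((p_left + t * d_left) + (p_right + s * d_right))

# Hypothetical binocular setup: eyes 6.4 cm apart, both fixating near (0, 0, 0.6) m.
left_eye, right_eye = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
print(vergence_point(left_eye, np.array([0.032, 0.001, 0.6]),
                     right_eye, np.array([-0.032, -0.001, 0.6])))  # ~[0, 0, 0.6]

When the rays are near-parallel (distant fixation), the denominator approaches zero and the estimate becomes unstable, which is one reason the ray-geometry intersection approach is often preferred when the object geometry is known.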