Proceedings of the 2018 International Conference on Advanced Visual Interfaces (AVI 2018)
DOI: 10.1145/3206505.3206522

VRpursuits: Interaction in Virtual Reality Using Smooth Pursuit Eye Movements

Abstract: Figure 1: We investigate the selection of moving 3D targets in virtual environments (A) using smooth pursuit eye movements (arrows are for illustration only and were not shown to users). We study how parameters specific to VR settings influence performance. We then develop and evaluate two sample applications: (B) a virtual ATM where users authenticate by following the digits with their eyes, and (C) a space shooting game where users blast asteroids by following them.
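The report page contains no implementation details, but pursuit-based selection of this kind is typically realized by correlating the gaze trajectory with each target's trajectory over a sliding window. The sketch below illustrates that idea only; the function names, window shapes, and the 0.8 threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two 1-D sample windows."""
    if a.std() == 0 or b.std() == 0:
        return 0.0  # a constant signal carries no pursuit information
    return float(np.corrcoef(a, b)[0, 1])

def select_pursuit_target(gaze_xy: np.ndarray, target_trajs, threshold: float = 0.8):
    """Return the index of the moving target whose trajectory best
    matches the gaze trajectory, or None if nothing exceeds the threshold.

    gaze_xy:      (N, 2) gaze samples over the current window
    target_trajs: iterable of (N, 2) target positions over the same window
    """
    best_idx, best_score = None, threshold
    for idx, traj in enumerate(target_trajs):
        # Correlate x and y independently; both axes must co-move
        # with the eyes for the target to count as pursued.
        score = min(pearson(gaze_xy[:, 0], traj[:, 0]),
                    pearson(gaze_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```

Window length and correlation threshold trade selection speed against false activations, which is the kind of VR-specific parameter the abstract says the paper studies.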

Cited by 81 publications (41 citation statements).
References 33 publications.
“…A range of works have compared eye and head pointing showing that eye gaze is faster and less strenuous, while head pointing is often preferred as more stable, controlled and accurate [5,10,18,23,44]. As in 2D contexts, eye pointing can be combined with fast manual confirmation by click or hand gesture [41,46], or with dwell time or other specific eye movement for hands-free selection [20,31,42]. In contrast to the 2D desktop setting, gaze in VR inherently involves eye-head coordination due to the wider FOV.…”
Section: Gaze Interaction in 3D Environments (mentioning)
confidence: 99%
“…A range of works have compared eye and head pointing showing that eye gaze is faster and less strenuous, while head pointing is often preferred as more stable, controlled and accurate [4,12,20,31]. Eye or head pointing can be combined with fast manual confirmation by click [27,32,38], or with dwell time for hands-free selection [18,25,30]. It has also been proposed to use gaze for coarse-grained selection followed by head movement for subsequent confirmation [22,36] or refinement of positional input [20].…”
Section: Related Work (mentioning)
confidence: 99%
“…In particular, pursuits avoid the Midas Touch problems of fixation-based gaze techniques, as the eyes only exhibit smooth pursuit when the user attends to a moving object. A few prior works have used smooth pursuit for selection in VR, however for selection of objects presented in motion [18,30]. A distinct novelty of our work is that we instead present motion around static 3D objects to facilitate their selection by pursuit without modification to the object's size or position.…”
Section: Related Work (mentioning)
confidence: 99%
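The citing work above presents motion around static objects rather than moving the objects themselves. As a purely illustrative sketch (not that paper's code, and with all names and parameter values assumed), one way to realize this is to orbit a small cue around each static object, giving each object a distinct phase so the cue trajectories stay decorrelated and a correlation matcher can distinguish them:

```python
import numpy as np

def orbit_position(center: np.ndarray, t: float,
                   radius: float = 0.05, freq_hz: float = 0.5,
                   phase: float = 0.0) -> np.ndarray:
    """Position of a motion cue orbiting a static object at time t."""
    angle = 2.0 * np.pi * freq_hz * t + phase
    return center + radius * np.array([np.cos(angle), np.sin(angle)])

# Hypothetical scene: three static objects (2D-projected), each with
# an evenly spaced phase offset so their cues remain distinguishable.
centers = [np.array([0.2, 0.5]), np.array([0.5, 0.5]), np.array([0.8, 0.5])]
phases = [2.0 * np.pi * i / len(centers) for i in range(len(centers))]
times = np.linspace(0.0, 1.0, 60)  # a one-second window at 60 Hz

cue_trajs = [np.stack([orbit_position(c, t, phase=p) for t in times])
             for c, p in zip(centers, phases)]
# cue_trajs can then be passed, together with a matching window of
# gaze samples, to a matcher like select_pursuit_target() above.
```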
“…Current work on motion matching interfaces has explored a variety of ways to capture user input, with the majority relying on optical tracking. Examples include systems that track users' eyes as these follow a moving target [18,25,27,48,52]; depth-cameras that track users' hands [9,21,22]; and systems that rely on off-the-shelf web-cams to capture any input motion in their field-of-view (FOV), be it performed by the users' hands, feet, or even their heads [11]. But due to inherent limitations of computer vision, such as being restricted by their FOV (interaction space), being susceptible to changing-light conditions and occlusion, and introducing privacy concerns when used in the context of smart homes [7], recent work looks at other forms of input sensing for motion matching.…”
Section: Motion Matching (mentioning)
confidence: 99%
“…Related work has developed and studied a variety of technical implementations of motion matching interaction, using webcams [11,12], depth-sensors [9], eye-trackers [18,25,37,48,52], magnets [39], and inertial measurement units (IMUs) embedded in smart-watches [50], phones [4], and AR headsets [19]. These implementations are supplemented with work on further algorithmic developments and novel deployments [10,16,21,27,29,47]. Taken together, these laboratory studies have shown that people are able to accurately interact with motion matching interfaces after a very short learning period.…”
Section: Introduction (mentioning)
confidence: 99%