Humans perform gaze shifts naturally through a combination of eye, head and body movements. Although gaze has long been studied as an input modality for interaction, prior work has largely ignored the coordination of the eyes, head and body. This article reports a study of gaze shifts in virtual reality that aims to address this gap and inform design. We identify general eye, head and torso coordination patterns and analyse the movements' relative contributions and temporal alignment. We quantify the effects of target distance, direction and user posture, describe preferred eye-in-head motion ranges, and identify high variability in head movement tendency. These insights lead us to propose gaze zones that reflect different levels of contribution from eye, head and body. We discuss design implications for HCI and VR, and conclude by arguing that gaze should be treated as multimodal input, with eye, head and body movement as synergetic in interaction design.
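As a rough illustration of the gaze-zone idea, the sketch below maps a target's angular eccentricity to a zone label. The boundary values and zone names are placeholders for illustration only, not the empirically derived ranges reported in the study.

```python
# Illustrative zone boundaries in degrees of eccentricity; these are
# placeholder values, not the empirically derived ranges from the study.
EYES_ONLY_LIMIT = 25.0   # targets this close are typically acquired with little head motion
EYE_HEAD_LIMIT = 60.0    # beyond this, torso rotation tends to join in


def gaze_zone(target_eccentricity_deg: float) -> str:
    """Map a target's angular distance from straight ahead to a gaze zone."""
    if target_eccentricity_deg <= EYES_ONLY_LIMIT:
        return "eyes-dominant"
    if target_eccentricity_deg <= EYE_HEAD_LIMIT:
        return "eye-head"
    return "eye-head-torso"


print(gaze_zone(15.0))   # eyes-dominant
print(gaze_zone(45.0))   # eye-head
print(gaze_zone(90.0))   # eye-head-torso
```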
Figure 1: BimodalGaze enables users to point by gaze and to seamlessly refine the cursor position with head movement. A: In Gaze Mode, the cursor (yellow) follows where the user looks but may not be sufficiently accurate. B: The pointer automatically switches into Head Mode (green) when gestural head movement is detected. C: The pointer automatically switches back into Gaze Mode when the user redirects their attention. Note that Head Mode is only invoked when the cursor needs adjustment; any natural head movement associated with a gaze shift is filtered out and does not cause a mode switch.
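The caption describes an automatic switch between Gaze Mode and Head Mode. Below is a minimal sketch of such a state machine, assuming that gestural head movement can be recognised as head rotation while the eyes hold still, and that a fixation away from the cursor signals redirected attention; the class name, thresholds and update signature are illustrative assumptions, not taken from the BimodalGaze implementation.

```python
import math

# Illustrative thresholds; these are assumptions, not values from the paper.
HEAD_VELOCITY_THRESHOLD = 5.0   # deg/s: head rotation above this may be gestural
GAZE_STABILITY_THRESHOLD = 1.5  # deg/s: gaze velocity below this counts as fixated
REFIXATION_THRESHOLD = 4.0      # deg: gaze offset from the cursor that ends Head Mode


class BimodalPointer:
    """Minimal sketch of automatic switching between Gaze Mode and Head Mode."""

    def __init__(self):
        self.mode = "gaze"
        self.cursor = [0.0, 0.0]  # cursor position in degrees of visual angle

    def update(self, gaze_pos, gaze_velocity, head_delta, head_velocity):
        """Advance one frame.

        gaze_pos:      (x, y) gaze point in degrees of visual angle
        gaze_velocity: gaze speed in deg/s
        head_delta:    (dx, dy) head rotation since the last frame in degrees
        head_velocity: head rotation speed in deg/s
        """
        if self.mode == "gaze":
            # Gaze Mode: the cursor follows the gaze point.
            self.cursor = list(gaze_pos)
            # Head rotating while the eyes hold still suggests a deliberate
            # (gestural) head movement; natural gaze shifts move eyes and head
            # together and are therefore filtered out by the velocity check.
            if head_velocity > HEAD_VELOCITY_THRESHOLD and gaze_velocity < GAZE_STABILITY_THRESHOLD:
                self.mode = "head"
        else:
            # Head Mode: the cursor is nudged by head rotation only.
            self.cursor[0] += head_delta[0]
            self.cursor[1] += head_delta[1]
            # A fixation well away from the cursor indicates redirected
            # attention, so control returns to the gaze point.
            if math.dist(gaze_pos, self.cursor) > REFIXATION_THRESHOLD:
                self.mode = "gaze"
                self.cursor = list(gaze_pos)
        return self.mode, tuple(self.cursor)
```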
Figure 1: Outline Pursuits support selection in occluded 3D scenes. A: The user points at an object of interest, but the selection is ambiguous due to occlusion by other objects. B: Potential targets are outlined, with each outline presenting a moving stimulus that the user can follow with their gaze. C: Matching of the user's smooth pursuit eye movement to one of the stimuli completes the selection. Note that Outline Pursuits can augment manual pointing as shown, or support hands-free input using the head or gaze for initial pointing.
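Pursuit-based selection of this kind is commonly implemented by correlating the recent gaze trajectory with each candidate stimulus trajectory over a sliding window; the sketch below follows that general pattern, though the matching used by Outline Pursuits may differ in detail. The threshold value and function names are assumptions.

```python
import statistics  # statistics.correlation requires Python 3.10+


def pursuit_correlation(gaze_xy, stimulus_xy):
    """Pearson correlation between gaze samples and one outline stimulus,
    averaged over the x and y axes. Both arguments are equally long lists of
    (x, y) samples from the same time window."""
    cx = statistics.correlation([p[0] for p in gaze_xy], [p[0] for p in stimulus_xy])
    cy = statistics.correlation([p[1] for p in gaze_xy], [p[1] for p in stimulus_xy])
    return (cx + cy) / 2.0


def select_by_pursuit(gaze_window, candidate_stimuli, threshold=0.8):
    """Return the id of the candidate whose outline stimulus the gaze follows
    most closely, or None if no candidate correlates above the threshold."""
    best_id, best_score = None, threshold
    for target_id, stimulus_window in candidate_stimuli.items():
        score = pursuit_correlation(gaze_window, stimulus_window)
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id
```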
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, yet existing approaches to gaze pointing are based on eye tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction between head-supported and eyes-only gaze to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and fast, iterative confirmation of targets. We demonstrate Eye&Head interaction in virtual reality applications, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that give users more control and flexibility in fast gaze pointing and selection.
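One way to realise the distinction between head-supported and eyes-only gaze is to look at eye-in-head eccentricity and head velocity. The sketch below is a simplified classifier along those lines; the thresholds are illustrative and the Eye&Head techniques may use different criteria. A technique could, for example, keep the pointer loosely coupled to eyes-only gaze for exploration and tighten the coupling when head support indicates committed pointing.

```python
# Illustrative thresholds; the Eye&Head techniques may use different criteria.
EYES_ONLY_ECCENTRICITY = 10.0  # deg: eye-in-head angle beyond which the head usually joins in
HEAD_MOTION_THRESHOLD = 3.0    # deg/s: head angular velocity treated as active head support


def classify_gaze_shift(eye_in_head_angle, head_velocity):
    """Label a gaze sample as 'head-supported' or 'eyes-only'.

    eye_in_head_angle: angle between gaze direction and the head's forward axis (deg)
    head_velocity:     current head angular velocity (deg/s)
    """
    if head_velocity > HEAD_MOTION_THRESHOLD or eye_in_head_angle > EYES_ONLY_ECCENTRICITY:
        return "head-supported"
    return "eyes-only"
```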
Figure 1: Radi-Eye in a smart home environment for control of appliances. A: The user turns on the lamp via a toggle selection with minimal effort, using only gaze (orange) and head (red) movements. B: The selection can be expanded into subsequent head-controlled continuous interaction to adjust the light colour via a slider. C: Gaze-triggered nested levels support a large number of widgets and easy selection of one of multiple preset lighting modes. The widgets enabled via Radi-Eye allow a high level of hands-free, at-a-distance control of objects from any position.
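A nested radial interface of this kind might be represented as a small tree of widgets, with callbacks for toggle selections, continuous (slider) adjustments, and nested levels. The structure below is purely illustrative, with hypothetical field names and callbacks, and does not reflect the actual Radi-Eye implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical widget tree for a nested radial interface; field names and
# callbacks are illustrative and not taken from the Radi-Eye implementation.


@dataclass
class RadialItem:
    label: str
    on_select: Optional[Callable[[], None]] = None        # toggle-style selection
    on_adjust: Optional[Callable[[float], None]] = None   # continuous (slider) control
    children: List["RadialItem"] = field(default_factory=list)  # nested level


lamp_menu = RadialItem("Lamp", children=[
    RadialItem("Power", on_select=lambda: print("toggle lamp")),
    RadialItem("Colour", on_adjust=lambda v: print(f"set hue to {v:.2f}")),
    RadialItem("Presets", children=[
        RadialItem("Reading", on_select=lambda: print("preset: reading")),
        RadialItem("Relax", on_select=lambda: print("preset: relax")),
    ]),
])
```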