Figure 1: BimodalGaze enables users to point by gaze and to seamlessly refine the cursor position with head movement. A: In Gaze Mode, the cursor (yellow) follows where the user looks but may not be sufficiently accurate. B: The pointer automatically switches into Head Mode (green) when gestural head movement is detected. C: The pointer automatically switches back into Gaze Mode when the user redirects their attention. Note that Head Mode is only invoked when needed for cursor adjustment. Any natural head movement associated with a gaze shift is filtered out and does not cause a mode switch.
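The caption implies a small state machine: head movement switches the pointer into Head Mode only when it occurs without an accompanying gaze shift, and a gaze shift switches it back. Below is a minimal sketch in Python, assuming per-frame gaze and head angular velocities are available from the headset; the threshold values and the simple velocity test are illustrative assumptions, not the paper's classifier.

```python
# Illustrative thresholds (assumptions, not values from the paper).
HEAD_GESTURE_VEL = 5.0   # deg/s: head velocity treated as a deliberate gesture
GAZE_SHIFT_VEL = 30.0    # deg/s: gaze velocity indicating a saccadic attention shift

class BimodalPointer:
    """Sketch of the gaze/head mode switch described in the caption."""

    def __init__(self):
        self.mode = "GAZE"

    def update(self, gaze_velocity, head_velocity):
        """Advance the mode given one frame of angular velocities (deg/s)."""
        if self.mode == "GAZE":
            # Head movement during a stable fixation counts as a refinement
            # gesture; head movement that accompanies a gaze shift is
            # filtered out and does not switch modes.
            if head_velocity > HEAD_GESTURE_VEL and gaze_velocity < GAZE_SHIFT_VEL:
                self.mode = "HEAD"
        elif gaze_velocity > GAZE_SHIFT_VEL:
            # Redirecting visual attention returns cursor control to the eyes.
            self.mode = "GAZE"
        return self.mode
```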
Fig. 1. Gaze+Hold uses explicit closing of one eye to modulate gaze input from the open eye, demonstrated here with drag and drop. (1) The user looks at the interface without triggering any effects; (2) on left-eye closure, the object is selected; (3) dragging is enabled via continuous gaze input from the open eye; (4) the interaction stops and the object is dropped when the left eye is reopened.

The eyes are coupled in their gaze function and are therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold, an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of the interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing, which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability, and users' spontaneous choice of eye for modulating input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability, or workload. This is significant for the utility of Gaze+Hold as it affords flexibility in mapping either eye in different configurations.

CCS Concepts: • Human-centered computing → Interaction techniques.
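The four numbered steps map naturally onto a two-state machine driven by a per-eye openness signal plus gaze from the open eye. A minimal Python sketch under that assumption follows; the openness threshold and the `Draggable` type are hypothetical stand-ins for whatever the eye tracker's API exposes.

```python
from dataclasses import dataclass
from enum import Enum

@dataclass
class Draggable:
    position: tuple  # (x, y) in screen coordinates

class State(Enum):
    IDLE = 0
    DRAGGING = 1

class GazeHoldDragDrop:
    """Sketch of the Gaze+Hold drag-and-drop cycle from Fig. 1."""

    def __init__(self, openness_threshold=0.2):
        self.state = State.IDLE
        self.threshold = openness_threshold  # illustrative value
        self.held = None

    def update(self, left_eye_openness, gaze_point, object_under_gaze):
        """Process one tracker frame; the left eye is mapped to 'hold'."""
        holding = left_eye_openness < self.threshold
        if self.state is State.IDLE and holding and object_under_gaze is not None:
            # (2) Closing the hold eye selects the object under gaze.
            self.held = object_under_gaze
            self.state = State.DRAGGING
        elif self.state is State.DRAGGING:
            if holding:
                # (3) Continuous gaze from the open eye drags the object.
                self.held.position = gaze_point
            else:
                # (4) Reopening the hold eye drops the object.
                self.held = None
                self.state = State.IDLE
```

Mapping the hold action to the left eye here is arbitrary; the study's finding that eye dominance did not affect performance suggests either eye could fill this role.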
Gaze interaction paradigms rely on the user looking at objects in the interface to select them or trigger actions. "Not looking" is an atypical and unexpected interaction to perform, yet the eye tracker can sense it. We illustrate the use of "not looking" as an interaction dynamic with examples from gaze-enabled games. We created a framework comprising a spectrum of five discrete categories for this unexpected use of gaze sensing. For each category, we analyse games that use gaze interaction and make the player look away from the game action, up to the point of closing their eyes. The framework is organised by whether specific game events mean the player might not, cannot, should not, must not, or does not look. Finally, we discuss the outcomes of using unexpected gaze interactions and the potential of the proposed framework as a new approach to guide the design of sensing-based interfaces.

CCS Concepts: • Human-centered computing → Human-computer interaction (HCI); Interaction design theory, concepts and paradigms.
Eye gaze for interaction depends on calibration. However, gaze calibration can deteriorate over time, affecting the usability of the system. We propose to use motion matching of smooth pursuit eye movements against known motion on the display to determine when accuracy has drifted, and to use this as input for re-calibration. To explore this idea we developed Smooth-i, an algorithm that stores calibration points and updates them incrementally when inaccuracies are identified. To validate the accuracy of Smooth-i, we conducted a study with five participants and a remote eye tracker. A baseline calibration profile was used by all participants to test the accuracy of Smooth-i re-calibration following interaction with moving targets. Results show that Smooth-i manages re-calibration efficiently, updating the calibration profile only when inaccurate data samples are detected.

CCS Concepts: • Human-centered computing → Interaction techniques.
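The core idea, matching gaze against the known target motion and folding in new calibration samples only when the motion matches but is offset, can be sketched as follows. The correlation and offset thresholds and the simple mean-offset test are assumptions for illustration, not the published Smooth-i algorithm.

```python
import numpy as np

def pursuit_correlation(gaze_xy, target_xy):
    """Pearson correlation between gaze and on-screen target trajectories,
    per axis. A high value indicates the user is smoothly pursuing the
    target, so the trajectory pair can be trusted for re-calibration."""
    gx, gy = np.asarray(gaze_xy, dtype=float).T
    tx, ty = np.asarray(target_xy, dtype=float).T
    return min(np.corrcoef(gx, tx)[0, 1], np.corrcoef(gy, ty)[0, 1])

def maybe_update_calibration(calibration_points, gaze_xy, target_xy,
                             corr_threshold=0.9, offset_threshold=30.0):
    """If the user is pursuing the target but the tracked gaze is offset
    from it by more than offset_threshold (display units), store a new
    (gaze, target) calibration sample; otherwise leave the profile alone."""
    if pursuit_correlation(gaze_xy, target_xy) < corr_threshold:
        return  # not a smooth pursuit; ignore this window of samples
    offset = np.mean(np.asarray(target_xy) - np.asarray(gaze_xy), axis=0)
    if np.linalg.norm(offset) > offset_threshold:
        # Incremental update: record where the gaze landed versus where
        # the user was demonstrably looking.
        calibration_points.append((np.mean(gaze_xy, axis=0),
                                   np.mean(target_xy, axis=0)))
```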
Gaze-based interactions have found their way into the games domain and are frequently employed as a means to support players in their activities. Instead of implementing gaze as an additional game feature via a game-centred approach, we propose a diegetic perspective by introducing gaze interaction roles and gaze metaphors. Gaze interaction roles represent ambiguous mechanics of gaze, whereas gaze metaphors serve as narrative figures that symbolise and illustrate the interaction dynamics they are applied to. Within this work, the current literature in the field is analysed for examples that design around gaze mechanics and follow a diegetic approach that takes roles and metaphors into account. A list of surveyed gaze metaphors related to each gaze role is presented and described in detail. Furthermore, a case study shows the potential of the proposed approach. Our work aims to contribute to existing frameworks, such as EyePlay, by reflecting on the ambiguous meaning of gaze in games. Through this integrative approach, players are anticipated to develop a deeper connection to the game narrative via gaze, resulting in a stronger sense of presence (i.e., of being in the game world).