When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for supporting manual interactions performed, for example, with mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction for pan and zoom in a computer desktop setup. While eye gaze is ideal for indicating a user's current point of interest, and thus where to zoom in, foot interaction is well suited for parallel input control, for example, to specify the zooming speed. Our investigation focuses on foot input devices that differ in their degrees of freedom (e.g., one- and two-directional foot pedals) and that can be seamlessly combined with gaze input.
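To make this division of labor concrete, the following Python sketch shows one plausible mapping: gaze fixes the zoom center while pedal deflection sets the zooming speed. The `Viewport` and `step` names, the exponential zoom law, and the `gain` value are illustrative assumptions and do not correspond to the presented system's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float     # viewport center, world x
    cy: float     # viewport center, world y
    scale: float  # zoom level (screen pixels per world unit)

def step(view: Viewport, gaze: tuple[float, float], pedal: float,
         dt: float, gain: float = 2.0) -> Viewport:
    """Advance the view by one frame of gaze-directed zooming.

    gaze  -- current gaze point in world coordinates (the zoom target)
    pedal -- signed deflection of a two-directional pedal in [-1, 1];
             positive zooms in, negative zooms out, 0 leaves the view as is
    dt    -- frame time in seconds
    gain  -- zoom speed scaling (illustrative value, not from the paper)
    """
    factor = math.exp(gain * pedal * dt)  # exponential zoom rate
    gx, gy = gaze
    # Zoom about the gaze point: shift the center so that the gazed-at
    # location stays fixed on screen while the scale changes.
    return Viewport(
        cx=gx + (view.cx - gx) / factor,
        cy=gy + (view.cy - gy) / factor,
        scale=view.scale * factor,
    )
```

Zooming about the gaze point keeps the attended location stationary on screen, so the foot's speed control can run in parallel without disturbing the primary manual task.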
Nowadays, the use of eyetracking to determine 2D gaze positions is common practice, and several approaches to the detection of 2D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. To this end, 3D gaze points are obtained using a vector-based approach and are then further processed by our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for both the real and the virtual stimuli, with larger variances at greater stimulus distances. The described algorithms are implemented in two dynamic-link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) designed for importing 2D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, comprising both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix.
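As a rough illustration of the two processing steps named above, the following Python sketch first estimates a 3D gaze point as the closest approach of the two eyes' gaze rays (one common realization of a vector-based approach) and then detects fixations with a dispersion criterion over an ellipsoidal bounding volume. The function names, the semi-axis values, and the minimum duration are illustrative assumptions, not the toolkit's actual defaults or API.

```python
import numpy as np

def gaze_point_3d(eye_l, dir_l, eye_r, dir_r):
    """One common vector-based estimate of the 3D gaze point: the midpoint
    of the shortest segment between the left and right gaze rays."""
    el, er = np.asarray(eye_l, float), np.asarray(eye_r, float)
    u, v = np.asarray(dir_l, float), np.asarray(dir_r, float)
    w0 = el - er
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:   # near-parallel rays: no stable estimate
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return (el + s * u + er + t * v) / 2.0

def detect_3d_fixations(points, timestamps, semi_axes=(15.0, 15.0, 45.0),
                        min_duration=0.1):
    """Dispersion-style detection of 3D fixations with an ellipsoidal
    bounding volume. Parameter values are illustrative; the depth axis
    is elongated because depth estimates are noisier than lateral ones.

    points     -- (N, 3) array of 3D gaze points in mm
    timestamps -- (N,) array of times in seconds
    Returns (start_idx, end_idx, centroid) triples.
    """
    pts = np.asarray(points, float)
    t = np.asarray(timestamps, float)
    ax = np.asarray(semi_axes, float)
    fixations, i, n = [], 0, len(pts)

    def inside(window):
        # Do all points fit in the ellipsoid centered at the centroid?
        d = (window - window.mean(axis=0)) / ax
        return bool(np.all(np.sum(d * d, axis=1) <= 1.0))

    while i < n:
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1                      # grow to the minimum duration
        if j >= n:
            break
        if inside(pts[i:j + 1]):
            while j + 1 < n and inside(pts[i:j + 2]):
                j += 1                  # extend while dispersion stays low
            fixations.append((i, j, pts[i:j + 1].mean(axis=0)))
            i = j + 1
        else:
            i += 1                      # slide the window forward
    return fixations
```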