We propose HeadGesture, a hands-free input approach to interact with Head Mounted Display (HMD) devices. Using HeadGesture, users do not need to raise their arms to perform gestures or operate remote controllers in the air. Instead, they perform simple gestures with head movements to interact with the devices. In this way, users' hands are free to perform other tasks, e.g., taking notes or manipulating tools. This approach also reduces hand occlusion of the field of view [11] and alleviates arm fatigue [7]. However, one main challenge for HeadGesture is distinguishing the defined gestures from unintentional head movements. To generate intuitive gestures and address the issue of gesture recognition, we proceed through a process of Exploration - Design - Implementation - Evaluation. We first design the gesture set through experiments on gesture space exploration and gesture elicitation with users. Then, we implement algorithms to recognize the gestures, including gesture segmentation, data reformation and unification, feature extraction, and machine-learning-based classification. Finally, we evaluate user performance with HeadGesture in a target selection experiment and in application tests. The results demonstrate that the performance of HeadGesture is comparable to that of mid-air hand gestures, as measured by completion time. Additionally, users feel significantly less fatigue than when using hand gestures and can learn and remember the gestures easily. Based on these findings, we expect HeadGesture to be an efficient supplementary input approach for HMD devices.
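To make the described pipeline concrete, below is a minimal sketch of such a recognizer: segment a head-motion stream, resample each segment to a fixed length, extract simple statistical features, and train a classifier. All names (segment_stream, unify, N_SAMPLES) and thresholds are illustrative assumptions, not the authors' implementation; the classifier here is a scikit-learn SVM standing in for whatever model the paper uses.

```python
# Hypothetical sketch of a head-gesture recognition pipeline:
# segmentation -> unification (resampling) -> features -> classification.
import numpy as np
from sklearn.svm import SVC

N_SAMPLES = 64  # fixed length each segment is resampled to (assumed)

def segment_stream(stream, motion_threshold=0.05):
    """Split a (T, 3) stream of head rotation velocities into
    candidate gesture segments where motion exceeds a threshold."""
    active = np.linalg.norm(stream, axis=1) > motion_threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append(stream[start:i])
            start = None
    if start is not None:
        segments.append(stream[start:])
    return segments

def unify(segment):
    """Resample a variable-length segment to N_SAMPLES frames so all
    gestures share one temporal resolution."""
    t_old = np.linspace(0, 1, len(segment))
    t_new = np.linspace(0, 1, N_SAMPLES)
    return np.stack([np.interp(t_new, t_old, segment[:, k])
                     for k in range(segment.shape[1])], axis=1)

def extract_features(segment):
    """Simple per-axis statistics as a stand-in feature vector."""
    return np.concatenate([segment.mean(axis=0), segment.std(axis=0),
                           segment.min(axis=0), segment.max(axis=0)])

def train(segments, labels):
    """Fit a classifier on labeled, unified gesture segments."""
    X = np.stack([extract_features(unify(s)) for s in segments])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf
```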
We propose HeadCross, a head-based interaction method to select targets on VR and AR head-mounted displays (HMDs). Using HeadCross, users control the pointer with head movements; to select a target, they move the pointer into the target and then back across the target boundary. In this way, users can select targets without using their hands, which is helpful when users' hands are occupied by other tasks, e.g., holding handrails. However, a major challenge for head-based methods is the false-positive problem: unintentional head movements may be incorrectly recognized as HeadCross gestures and trigger selections. To address this issue, we first conduct a user study (Study 1) to observe user behavior while performing HeadCross and to identify the behavioral differences between HeadCross and other types of head movements. Based on the results, we discuss design implications, extract useful features, and develop the recognition algorithm for HeadCross. To evaluate HeadCross, we conduct two further user studies. In Study 2, we compared HeadCross to a dwell-based selection method, a button-press method, and a mid-air gesture-based method. Two typical target selection tasks (text entry and menu selection) were tested on both VR and AR interfaces. Results showed that, compared to the dwell-based method, HeadCross improved the sense of control; compared to the two hand-based methods, it improved interaction efficiency and reduced fatigue. In Study 3, we compared HeadCross to three alternative designs of head-only selection methods. Results showed that HeadCross was perceived to be significantly faster than the alternatives. We conclude with a discussion of the interaction potential and limitations of HeadCross.
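The core selection mechanic (enter the target, then cross back out over its boundary) can be expressed as a small per-frame state machine. The sketch below is an assumed reconstruction from the abstract alone: class names, the circular target, and the dwell-frame cutoff used to reject slow drifts are all illustrative, and the paper's actual recognizer additionally uses behavioral features from Study 1 to filter unintentional movements.

```python
# Hypothetical sketch of HeadCross-style crossing selection: a target
# fires only when the pointer enters it and then exits quickly.
from dataclasses import dataclass

@dataclass
class CircleTarget:
    cx: float
    cy: float
    r: float

    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

class HeadCrossDetector:
    def __init__(self, target, max_dwell_frames=30):
        self.target = target
        self.inside = False
        self.frames_inside = 0
        self.max_dwell_frames = max_dwell_frames  # assumed cutoff

    def update(self, x, y):
        """Feed one head-pointer sample per frame; returns True on selection."""
        now_inside = self.target.contains(x, y)
        if now_inside and not self.inside:
            self.inside = True            # pointer crossed into the target
            self.frames_inside = 0
        elif now_inside:
            self.frames_inside += 1
        elif self.inside:
            self.inside = False           # pointer crossed back out
            # a quick in-and-out counts as HeadCross; a long stay is
            # treated as ordinary pointing, not a selection
            if self.frames_inside <= self.max_dwell_frames:
                return True
        return False
```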