ABSTRACT

We study interaction modalities for mobile devices (smartphones and tablets) that rely on camera-based head tracking. This technique defines new possibilities for input and output interaction. For output, by computing the position of the user's head relative to the device, it is for example possible to realistically control the viewpoint on a 3D scene (Head-Coupled Perspective, HCP). This technique improves the output interaction bandwidth by enhancing depth perception and by allowing the visualization of large workspaces (virtual window). For input, head movement can be used as a means of interacting with a mobile device; moreover, this input modality requires no sensor other than the built-in front-facing camera.

In this paper, we classify the interaction possibilities offered by head tracking on smartphones and tablets. We then focus on output interaction by introducing several applications of HCP on both smartphones and tablets and by presenting the results of a qualitative user experiment.
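Head-Coupled Perspective works by turning the tracked head position into an asymmetric (off-axis) view frustum, so the rendered scene behaves like a window rather than a flat picture. The paper does not give an implementation; the following is a minimal sketch of the standard off-axis computation, assuming head coordinates are expressed in metres relative to the centre of the screen (all names here are hypothetical, not from the paper):

```python
def off_axis_frustum(head, screen_w, screen_h, near=0.1):
    """Asymmetric frustum bounds for head-coupled perspective.

    head: (x, y, z) position of the viewer's head in screen-centred
    coordinates (metres), with z > 0 in front of the display.
    Returns (left, right, bottom, top) at the near plane, suitable
    for an off-axis projection matrix (e.g. glFrustum-style).
    """
    hx, hy, hz = head
    if hz <= 0:
        raise ValueError("head must be in front of the screen (z > 0)")
    # Project the physical screen edges, as seen from the head,
    # onto the near plane.
    scale = near / hz
    left = (-screen_w / 2 - hx) * scale
    right = (screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top = (screen_h / 2 - hy) * scale
    return left, right, bottom, top
```

With the head centred in front of the screen the frustum is symmetric; moving the head to the right skews the frustum leftward, which is exactly what produces the "virtual window" effect described in the abstract.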
This paper presents the design and evaluation of the Wavelet menu and its implementation on the iPhone. The Wavelet menu is a concentric, hierarchical Marking menu operated with simple gestures. The novice mode, i.e. when the menu is displayed, is well adapted to the limited screen space of handheld devices because the representation of the menu hierarchy is inverted: the deepest submenu is always displayed at the center of the screen. The visual design is based on a stacking metaphor to reinforce the perception of the hierarchy and to help users quickly understand how the technique works. The menu also supports submenu previsualization, a key property for navigating efficiently in a hierarchy of commands. The quantitative evaluation shows that the Wavelet menu provides an intuitive way to support efficient gesture-based navigation. The expert mode, i.e. gesturing without waiting for the menu to pop up, is another key property of the Wavelet menu: by providing stroke shortcuts, it favors the selection of frequent commands and makes eyes-free selection possible. A user experiment shows that participants are able to select commands, eyes-free, while walking.
Exploration and navigation in multimedia data hierarchies (e.g., photos, music) are frequent tasks on mobile devices. However, visualization and interaction are impoverished by the limited size of the screen and the lack of precise input devices. As a result, menus on mobile devices do not support navigation as efficiently as the many innovative menu techniques proposed for desktop platforms. In this paper, we present Wavelet, an adaptation of the Wave menu for navigating multimedia data on the iPhone. Its layout, based on an inverted representation of the hierarchy, is particularly well adapted to mobile devices: it guarantees that submenus are always displayed on the screen, and it supports efficient navigation by providing previsualization of the submenus.
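The inverted layout described above can be modelled as a stack of menu levels where the current (deepest) submenu sits at the centre and its ancestors are stacked outward around it. The toy model below illustrates that state management only; it is a hypothetical sketch for exposition, not the authors' implementation:

```python
class WaveletMenu:
    """Toy model of the Wavelet menu's inverted hierarchy.

    Menus are nested dicts; leaves are command identifiers. The last
    element of `path` is the submenu conceptually drawn at the centre
    of the screen, with its ancestors stacked outward around it.
    """

    def __init__(self, root):
        self.path = [root]  # root .. current (centre) submenu

    def descend(self, label):
        """Select an item: enter a submenu or trigger a leaf command."""
        item = self.path[-1][label]
        if isinstance(item, dict):
            self.path.append(item)  # submenu becomes the new centre
            return None
        return item  # leaf command selected

    def back(self):
        """Pop the centre submenu, returning to its parent."""
        if len(self.path) > 1:
            self.path.pop()

    def rings(self):
        """Item labels per level, from outermost ring to the centre."""
        return [sorted(menu) for menu in self.path]
```

Because ancestors only ever surround the centre, the deepest submenu is always fully on screen regardless of hierarchy depth, which is the property the layout is designed to guarantee.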