The popularity of augmented reality (AR) applications on mobile devices is increasing, but there is as yet little research on their use in real-world settings. We review data from two pioneering field trials in which MapLens, a magic lens that augments paper-based city maps, was used in small-group collaborative tasks. The first study compared MapLens to a digital version akin to Google Maps; the second compared using one shared mobile device to using multiple devices. The studies find place-making and the use of artefacts to communicate and establish common ground to be the predominant modes of interaction in AR-mediated collaboration, with users working on tasks together despite not needing to.
In this paper we report on our experience with the design and evaluation of multimodal user interfaces in various contexts. We introduce a novel combination of existing design and evaluation methods in the form of a 5-step iterative process, and show the feasibility of this method and some of the lessons learned through the design of a messaging application for two contexts (in-car, walking). The iterative design process we employed included the following five basic steps: 1) identification of the limitations affecting the usage of different modalities in various contexts (contextual observations and context analysis), 2) identifying and selecting suitable interaction concepts and creating a general design for the multimodal application (storyboarding, use cases, interaction concepts, task breakdown, application UI and interaction design), 3) creating modality-specific UI designs, 4) rapid prototyping, and 5) evaluating the prototype in naturalistic situations to find key issues to be taken into account in the next iteration. We not only found clear indications that context affects users' preferences for modalities and interaction strategies, but also identified some of these preferences. For instance, while speech interaction was preferred in the car environment, users did not consider it useful when they were walking. 2D (finger strokes) and especially 3D (tilt) gestures were preferred by walking users. © 2008 ACM
We introduce and present findings from field trials of MapLens, a mobile augmented reality (AR) digital-physical map system. In our trials we enlisted a mix of 37 early adopters, environmental researchers, scouts and their families to use MapLens to play an environmental awareness-raising location-based game. A comparative trial was run with a non-AR digital system. Analyses of videos, field notes, interviews, questionnaires and user-created content expose phenomena that arise uniquely when using AR maps in the wild. We report on how augmentation affects the way participants use their body and hands, manipulate the mobile device in tandem with the physical map, walk while using it, and collaborate. We found that the MapLens solution facilitates place-making through its constant need to reference the physical map, and in that it also allows for ease of bodily configurations for the group, encourages establishment of common ground, and thereby invites discussion, negotiation and public problem-solving. Its main potential lies not so much in use for navigation but in use as a collaborative tool.
With the recent introduction of mass-market mobile phones with touch-sensitive displays and location, bearing and motion sensing, we are on the cusp of significant progress in highly interactive mobile social networking. We propose that such systems must work in various contexts and under various levels of uncertainty, and must utilize different types of human senses. In order to explore the feasibility of such a system, we describe an experiment with a multimodal implementation that allows users to engage in continuous interaction with each other using capacitive touch input and visual and/or vibro-tactile feedback, and to perform a goal-oriented collaborative target-acquisition task. An initial user study found the approach to be interesting and engaging despite the constraints imposed by the interaction method. © 2014 Springer-Verlag Berlin Heidelberg
We describe a controlled Wizard-of-Oz study using a medium-fidelity driving simulator investigating how a guided dialog strategy performs when compared to open dialog while driving, with respect to the cognitive loading these strategies impose on the driver. Through our analysis of driving performance logs, speech data, NASA-TLX questionnaires, and bio-signals (heart rate and EEG) we found the secondary speech task to have a measurable adverse effect on driving performance, and that guided dialog is less cognitively demanding in dual-task (driving plus speech interaction) conditions. The driving performance logs and heart rate variability information proved useful for identifying cognitively challenging situations while driving. These could provide important information to an in-car dialog management system that could take into account the driver's cognitive resources to provide safer speech-based interaction by adapting the dialog.
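The abstract above does not specify how heart rate variability was computed; as a minimal illustrative sketch (not the authors' method), one common short-term HRV metric is RMSSD, the root mean square of successive differences between RR intervals. Lower RMSSD over a window is often associated with higher cognitive load. The function name, window data, and the comparison below are all hypothetical.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms).

    A standard short-term HRV metric: larger values indicate more
    beat-to-beat variability, which tends to drop under load.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative (fabricated) windows: a relaxed baseline vs. a dual-task
# (driving plus speech) segment with flatter beat-to-beat variation.
baseline = [820, 810, 835, 790, 845, 800]
dual_task = [700, 702, 698, 701, 699, 700]
print(rmssd(baseline) > rmssd(dual_task))  # higher variability at baseline
```

A dialog manager as envisioned in the abstract could, under these assumptions, compare such windowed metrics against a per-driver baseline to decide when to fall back to a more guided dialog.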