Despite typically receiving little emphasis in visualization research, interaction is the catalyst for the user's dialogue with the data and, ultimately, for the user's understanding of and insight into that data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how best to design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow," and Norman's gulfs of execution and evaluation.
We present HuddleLamp, a desk lamp with an integrated RGB-D camera that precisely tracks the movements and positions of mobile displays and hands on a table. This enables a new breed of spatially aware multi-user and multi-device applications for around-the-table collaboration without an interactive tabletop. At any time, users can add or remove displays and reconfigure them in space in an ad hoc manner, without installing any software or attaching markers. Additionally, hands are tracked to detect interactions above and between displays, enabling fluent cross-device interactions. We contribute a novel hybrid sensing approach that combines RGB and depth data to increase tracking quality, together with a technical evaluation of its capabilities and limitations. To enable installation-free ad hoc collaboration, we also introduce a web-based architecture and JavaScript API for future HuddleLamp applications. Finally, we demonstrate the resulting design space using five examples of cross-device interaction techniques.
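The abstract does not detail the JavaScript API itself, so the following TypeScript sketch only illustrates what an installation-free, event-driven client for such a web-based architecture could look like. Every name in it (HuddleClient, DeviceState, onLayoutChanged, the ws://lamp.local address, the device id) is a hypothetical placeholder, not HuddleLamp's actual API.

```typescript
// Hypothetical sketch of a browser-based client for a spatially aware
// multi-device setup. Devices join by opening a web page; the camera-side
// server pushes position updates over a WebSocket. All identifiers here
// are invented for illustration, not HuddleLamp's real API.

interface DeviceState {
  id: string;
  x: number;      // position on the table plane, in cm
  y: number;
  angle: number;  // orientation in degrees
}

type LayoutHandler = (devices: DeviceState[]) => void;

class HuddleClient {
  private handlers: LayoutHandler[] = [];

  constructor(serverUrl: string) {
    const socket = new WebSocket(serverUrl);
    socket.onmessage = (msg: MessageEvent) => {
      const devices = JSON.parse(msg.data) as DeviceState[];
      this.handlers.forEach((h) => h(devices));
    };
  }

  // Invoke the handler whenever the camera reports a new device layout.
  onLayoutChanged(handler: LayoutHandler): void {
    this.handlers.push(handler);
  }
}

// Usage: render the part of a shared workspace that falls under this device.
const client = new HuddleClient("ws://lamp.local:8080/huddle");
client.onLayoutChanged((devices) => {
  const me = devices.find((d) => d.id === "tablet-1"); // assumed device id
  if (me) {
    console.log(`viewport at (${me.x}, ${me.y}), rotated ${me.angle} deg`);
  }
});
```

The design choice worth noting is that position tracking lives entirely on the camera side, so a client needs nothing beyond a browser and a WebSocket connection, which is what makes the collaboration installation-free.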
We introduce Blended Interaction, a new conceptual framework that helps to explain when users perceive user interfaces as "natural" or not. Based on recent findings from embodied cognition and cognitive linguistics, Blended Interaction provides a novel and more accurate description of the nature of human-computer interaction (HCI). In particular, it introduces the notion of conceptual blends to explain how users rely on familiar, real-world concepts whenever they learn to use new digital technologies. We apply Blended Interaction in the context of post-"Windows, Icons, Menus, Pointer" (WIMP) interactive spaces. These spaces are ubiquitous computing environments for computer-supported collaboration among multiple users in a physical space or room (e.g., meeting rooms, design studios, or libraries) augmented with novel interactive technologies and digital computation (e.g., multi-touch walls, tabletops, and tablets). Ideally, in these spaces, the virtues of the familiar physical and social world are combined with those of the digital realm in a considered manner, so that the desired properties of each are preserved and a seemingly "natural" HCI is achieved. To support designers in this goal, we explain how users' conceptual systems use blends to tie familiar concepts together with the novel powers of digital computation. Furthermore, we introduce four domains of design to structure the underlying problem and design space: individual interaction, social interaction, workflow, and physical environment. We situate our framework within related work (e.g., metaphors, mental models, direct manipulation, image schemas, and reality-based interaction) and illustrate Blended Interaction using design decisions made in recent projects.
We present a proof of concept of a mobile navigational aid that uses the Microsoft Kinect and optical marker tracking to help visually impaired people find their way inside buildings. The system is the result of a student project and is based entirely on low-cost hardware and software. It provides continuous vibrotactile feedback at the person's waist to give an impression of the environment and to warn about obstacles. Furthermore, optical markers can be used to tag points of interest within the building, enabling synthesized voice instructions for point-to-point navigation.
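To make the obstacle-warning idea concrete, the following sketch maps one horizontal scanline of Kinect depth readings onto intensities for a belt of vibration motors, with nearer obstacles producing stronger vibration in the corresponding direction. The motor count, range limits, and linear mapping are illustrative assumptions, not the project's actual implementation.

```typescript
// Illustrative sketch (not the project's actual code): convert a row of
// Kinect depth readings into per-motor vibration intensities in [0, 1].

const NUM_MOTORS = 6;      // assumed number of motors on the waist belt
const MIN_RANGE_MM = 800;  // approximate near limit of the Kinect sensor
const MAX_RANGE_MM = 4000; // beyond this distance, obstacles are ignored

// depthRow: one horizontal scanline of depth values in millimetres,
// ordered left to right across the camera's field of view.
function motorIntensities(depthRow: number[]): number[] {
  const segment = Math.floor(depthRow.length / NUM_MOTORS);
  const intensities: number[] = [];
  for (let m = 0; m < NUM_MOTORS; m++) {
    // Nearest valid reading within this motor's angular segment.
    const slice = depthRow
      .slice(m * segment, (m + 1) * segment)
      .filter((d) => d >= MIN_RANGE_MM);
    const nearest = slice.length > 0 ? Math.min(...slice) : Infinity;
    // Linear ramp: 1.0 at the near limit, 0.0 at or beyond the far limit.
    const t = (MAX_RANGE_MM - nearest) / (MAX_RANGE_MM - MIN_RANGE_MM);
    intensities.push(Math.max(0, Math.min(1, t)));
  }
  return intensities;
}

// Example: a wall close on the left, open space on the right.
console.log(motorIntensities([900, 950, 1000, 3500, 3800, 4200]));
// ~ [0.97, 0.95, 0.94, 0.16, 0.06, 0]: stronger vibration on the left motors
```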