We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times and docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations and people's preferences, and place our findings in the context of the practical application domain of 3D data analysis environments.
We examine a class of techniques for 3D object manipulation on mobile devices, in which the device's physical motion is applied to 3D objects displayed on the device itself. This "local coupling" between input and display creates specific challenges compared to manipulation techniques designed for monitor-based or immersive virtual environments. Our work focuses specifically on the mapping between device motion and object motion. We review existing manipulation techniques and introduce a formal description of the main mappings under a common notation. Based on this notation, we analyze these mappings and their properties in order to answer crucial usability questions. We first investigate how the 3D objects should move on the screen, since the screen also moves with the mobile device during manipulation. We then investigate the effects of a limited range of manipulation and present a number of solutions to overcome this constraint. This work provides a theoretical framework to better understand the properties of locally-coupled 3D manipulation mappings based on mobile device motion.
Manipulating slice planes is an important task for exploring volumetric datasets. Since this task is inherently 3D, it is difficult to accomplish with standard 2D input devices. Alternative interaction techniques have been proposed for direct and natural 3D manipulation of slice planes. However, these techniques typically require bulky, dedicated hardware, making them inconvenient for everyday work. To address this issue, we adapted two of these techniques for use in a portable, self-contained handheld AR environment. The first is based on a tangible slicing tool, and the other on a spatially aware display. In this paper, we describe our design choices and the technical challenges encountered in this implementation. We then present the results, both objective and subjective, of an evaluation of the two slicing techniques. Our study provides new insight into the usability of these techniques in a handheld AR setting.