Designing collaborative interfaces for tabletops remains difficult because we do not fully understand how groups coordinate their actions when working collaboratively over tables. We present two observational studies of pairs completing independent and shared tasks that investigate collaborative coupling, or the manner in which collaborators are involved and occupied with each other's work. Our results indicate that individuals frequently and fluidly engage and disengage with group activity through several distinct, recognizable states with unique characteristics. We describe these states and explore the consequences of these states for tabletop interface design.
Most graphical user interfaces provide visual cursors to facilitate interaction with input devices such as mice, pointers, and pens. These cursors often include directional cues that could influence the stimulus-response compatibility of user input. We conducted a controlled evaluation of four cursor orientations and an orientation-neutral cursor in a circular menu selection task. Mouse interaction on a desktop, pointer (i.e. wand) interaction on a large screen, and pen interaction on a Tablet PC were evaluated. Our results suggest that choosing appropriate cursors is especially important for pointer interaction, but may be less important for mice or pens. Cursors oriented toward the lower-right corner of a display yielded the poorest performance overall, while orientation-neutral cursors were generally the best. Advantages were found for orientations aligned with the direction of movement. We discuss these results and suggest guidelines for the appropriate use of cursors in various input and display configurations.
We conducted an ethnographic field study examining how a building design team used representational artifacts to coordinate the design of building systems, structure, and architecture. The goals of this study were to characterize the different interactions meeting participants had with design artifacts, to identify bottlenecks in the design coordination process, and to develop design considerations for CSCW technology that will support in-person design coordination meetings of building design teams. We found that gesturing, navigation, annotation, and viewing were the four primary interactions meeting participants had with design artifacts. The form of the design information (2D vs. 3D, digital vs. physical) had minimal impact on gesture interactions, although navigation varied significantly with different representations of design information. Bottlenecks in the design process were observed when meeting participants attempted to navigate digital information, interact with wall displays, and access information individually and as a group. Based on our observations, we present some possible directions for future CSCW technologies, including new mechanisms for digital bookmarking, interacting with 2D and 3D design artifacts simultaneously, and enriched pointing techniques and pen functionality.
The two visual systems hypothesis in neuroscience suggests that pointing without visual feedback may be less affected by spatial visual illusions than cognitive interactions such as judged target location. Our study examined predictions of this theory for target localization on a large-screen display. We contrasted pointing interactions under varying levels of visual feedback with location judgments of targets that were surrounded by an offset frame. As predicted by the theory, the frame led to systematic errors in verbal reports of target location but, for some participants, not in pointing without visual feedback. We also found that pointing with visual feedback produced a similar level of error as location judgments, while temporally lagged visual feedback appeared to reduce these errors somewhat. This suggests that pointing without visual feedback may be a useful interaction technique in situations described by the two visual systems literature, especially with large-screen displays and immersive environments.
Neuroanatomical evidence indicates the human eye's visual field can be functionally divided into two vertical hemifields, each specialized for specific functions. The upper visual field (UVF) is specialized to support perceptual tasks in the distance, while the lower visual field (LVF) is specialized to support visually-guided motor tasks, such as pointing. We present a user study comparing mouse- and touchscreen-based pointing for items presented in the UVF and LVF on an interactive display. Consistent with the neuroscience literature, we found that mouse and touchscreen pointing were faster and more accurate for items presented in the LVF than for identical targets presented in the UVF. Further analysis found previously unreported performance differences between the visual fields for touchscreen pointing that were not observed for mouse pointing. This indicates that placing interactive items favorably for the LVF yields superior user performance, especially for systems dependent on direct touch interaction.