ABSTRACT
Pen-based user interfaces are becoming ever more popular. Gestures (i.e., marks made with a pen to invoke a command) are a valuable aspect of pen-based UIs, but they also have drawbacks. The challenge in designing good gestures is to make them easy for people to learn and remember. With the goal of better gesture design, we performed a pair of experiments to determine why users find gestures similar. From these experiments, we have derived a computational model for predicting perceived gesture similarity that correlates 0.56 with observation. We will incorporate the results of these experiments into a gesture design tool, which will aid the pen-based UI designer in creating gesture sets that are easier to learn and more memorable.

Keywords: Pen-based user interfaces, pen gestures, multi-dimensional scaling, similarity, perception
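The abstract does not give the model itself, so the following is only an illustrative sketch: predict pairwise gesture similarity from simple geometric stroke features, then check how well predictions track human judgments with a correlation coefficient. Every feature choice and function name here is an assumption, not the authors' published method.

import numpy as np

def gesture_features(points):
    """A few simple geometric features of a gesture stroke (a hypothetical
    feature set: path length, bounding-box aspect ratio, initial angle)."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)
    length = np.linalg.norm(seg, axis=1).sum()
    w, h = pts.max(axis=0) - pts.min(axis=0)
    aspect = w / (h + 1e-9)
    angle = np.arctan2(seg[0, 1], seg[0, 0]) if len(seg) else 0.0
    return np.array([length, aspect, angle])

def predicted_similarity(g1, g2):
    """Predict perceived similarity as inverse distance in feature space."""
    d = np.linalg.norm(gesture_features(g1) - gesture_features(g2))
    return 1.0 / (1.0 + d)

def model_fit(predicted, observed):
    """Pearson correlation between predicted and human-judged similarities;
    the paper reports 0.56 for its own (different) model."""
    return np.corrcoef(predicted, observed)[0, 1]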
Digital video is becoming increasingly ubiquitous. However, editing video remains difficult for several reasons: it is a time-based medium, it has dual tracks of audio and video, and current tools force users to work at the smallest level of detail. Based on interviews with professional video editors, we developed a video editor, called Silver, that uses metadata to make digital video editing more accessible to novices. To help users visualize video, Silver provides multiple views with different semantic content and at different levels of abstraction, including storyboard, editable transcript, and timeline views. Silver offers smart editing operations that help users resolve the inconsistencies that arise because of the different boundaries in audio and video. We conducted a preliminary user study to investigate the effectiveness of Silver's smart editing. Participants successfully edited video after only a short tutorial, both with and without smart editing assistance. Our research suggests several ways in which video editing tools could use metadata to assist users in the reuse and composition of video.
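The abstract does not detail the smart editing operations. As one plausible, purely hypothetical sketch, assume a transcript with word-level timestamps; a cut requested mid-word could then be snapped to the nearest word boundary so the audio track is not clipped awkwardly.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the clip
    end: float

def snap_cut_to_word(cut_time, transcript):
    """Move a requested cut time to the nearest word boundary in the
    transcript (hypothetical helper; not Silver's published API)."""
    boundaries = [t for w in transcript for t in (w.start, w.end)]
    return min(boundaries, key=lambda b: abs(b - cut_time))

# Example: a cut requested at 3.4 s snaps to the end of "video" at 3.5 s.
words = [Word("edit", 2.0, 2.6), Word("video", 2.8, 3.5)]
print(snap_cut_to_word(3.4, words))  # 3.5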
The human factor is often described as the weakest part of a security system, and users as the weakest link in the security chain. This workshop will seek to understand the roles and demands placed on users of security systems, and explore design solutions that can assist in making security systems usable and effective. In addition to examining end-users, this workshop will also examine the issues faced by security system developers and operators. The primary motivation for the workshop is that previous research on HCI and Security (HCISEC) has been scattered across different conferences and journals, making information sharing difficult. The goal of the workshop is to build a more cohesive and active HCISEC community of researchers and practitioners. This will be done by building a network of interested people, sharing research activities and results, discussing high-priority areas for research and development, and exploring opportunities for collaboration.
It is difficult to interact with computer displays that are across the room. A popular approach is to use laser pointers tracked by a camera, but interaction techniques using laser pointers tend to be imprecise, error-prone, and slow. Although many previous papers discuss laser pointer interaction techniques, none seem to have performed user studies to help inform the design. This paper reports on two studies of laser pointer interactions that answer some of the questions related to interacting with objects using a laser pointer. The first experiment evaluates various parameters of laser pointers. For example, the time to acquire a target is about 1 second, and the jitter due to hand unsteadiness is about ±8 pixels, which can be reduced to about ±2 to ±4 pixels by filtering. We compared 7 different ways to hold various kinds of laser pointers, and found that a laser pointer built into a PalmOS device was the most stable. The second experiment compared 4 different ways to select objects on a large projected display. We found that tapping directly on a wall-size SmartBoard was the fastest and most accurate method, followed by a new interaction technique that copies the area of interest from the big screen to a handheld. Third in speed was the conventional mouse, and the laser pointer came in last, with a time almost twice as long as tapping on the SmartBoard.
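The abstract does not say which filter was used to reduce jitter. A minimal sketch of one common choice, an exponential moving average over tracked pointer coordinates, shows the basic trade-off: smoothing reduces jitter but adds lag.

def smooth(points, alpha=0.3):
    """Exponential moving-average filter over (x, y) pointer samples.
    Smaller alpha smooths more (less jitter) but lags the true pointer
    position; the study's actual filter is not specified in the abstract."""
    out = [points[0]]
    for x, y in points[1:]:
        px, py = out[-1]
        out.append((px + alpha * (x - px), py + alpha * (y - py)))
    return out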
Today, state-of-the-art user interfaces often include new interaction technologies, such as speech recognition, computer vision, or gesture recognition. Unfortunately, these technologies are difficult for most interface designers to incorporate into their interfaces, and traditional tools do not help designers with these technologies. One such technology is pen gestures, which are valuable as a powerful pen-based interaction technique, but are difficult to design well. We developed an interface design tool that uses unsolicited advice to help designers of pen-based user interfaces create pen gestures. Specifically, the tool warns designers when their gestures will be perceived to be similar and advises designers how to make their gestures less similar. We believe that the issues we encountered while designing an interface for advice and implementing this advice will reappear in design tools for other novel input technologies, such as hand and body gestures.
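A minimal sketch of the kind of check such an advisory tool might run. The similarity function is assumed to come from a perceptual model like the one sketched earlier, and the 0.8 threshold is an arbitrary placeholder, not a published value.

def similarity_warnings(gestures, similarity, threshold=0.8):
    """Warn about gesture pairs whose predicted perceptual similarity
    exceeds a threshold (hypothetical interface, not the tool's API).
    gestures: dict mapping gesture name -> stroke points."""
    names = list(gestures)
    warnings = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = similarity(gestures[a], gestures[b])
            if score > threshold:
                warnings.append((a, b, score))
    return warnings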