We introduce an input system based on bidirectional strokes that are segmented by tactile landmarks. By giving the user tactile feedback about the length of a stroke during input, we decrease the GUI's dependence on the visual display. Concatenating separate strokes into multi-strokes allows complex commands to be entered, encoding commands, data content, or both simultaneously. To demonstrate their power, we show how multi-strokes can be used to quickly traverse a menu hierarchy. In addition, we show how inter-landmark segments of the sensor may be used for continuous and discrete parameter entry, resulting in a multifunctional interaction paradigm. We also introduce multi-widgets, which allow direct control of multiple virtual widgets without changing the state of the device or using modifier buttons. This approach to input does not depend on material displayed visually to the user and, thanks to tactile guidance, may be used by expert users as an eyes-free interface. We believe these benefits make the interaction system especially suitable for wearable computer systems that use a head-worn display and wrist-worn, watch-style devices.
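As a minimal sketch of how landmark-segmented multi-strokes might drive menu traversal, the following assumes a simple encoding in which the number of landmark segments a stroke crosses selects the item at that index in the current menu level. The menu contents and the `select` helper are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: landmark-segmented strokes traversing a menu hierarchy.
# Assumption: each stroke's length, counted in landmark segments crossed,
# picks the item at that (1-based) position in the current menu level.

MENU = {
    "File": {"Open": {}, "Save": {}},
    "Edit": {"Cut": {}, "Paste": {}},
}

def select(menu: dict, stroke_lengths: list) -> list:
    """Follow one menu level per stroke; return the selected path."""
    node = menu
    path = []
    for n in stroke_lengths:
        keys = list(node)
        key = keys[n - 1]  # a stroke spanning n segments -> nth item
        path.append(key)
        node = node[key]
    return path

# A 2-segment stroke followed by a 1-segment stroke selects Edit -> Cut.
```

Because each landmark gives tactile confirmation of segment count, an expert user could enter such a stroke sequence without looking at the display.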
One of the major limitations of portable computing devices is the small size of their built-in displays. Fortunately, extremely small projection systems are being developed that can be integrated into devices that are small enough to be body-worn, yet can project a large image onto surfaces in the environment. To explore how a user might interact with this near-horizon technology, we created a functional simulation of a wrist-worn projector. We then developed a set of interaction techniques that assume the wrist-worn computer and projector are equipped with position and orientation sensors, in addition to a touch-sensitive built-in screen. To complement the techniques that rely on the spatial manipulation of the user's forearm and the device itself, we also describe a cursorless watch user interface that minimizes the need for the user to look down at the device's built-in screen. Finally, we present a sample application that illustrates our interaction techniques.
We present a set of interaction techniques that make novel use of a small pressure-sensitive pad to allow one-handed direct control of a large number of parameters. The surface of the pad is logically divided into four linear strips that simulate traditional interaction metaphors and whose functions may be modified dynamically under software control. No homing of the hand or fingers is needed once the fingers are placed above their corresponding strips. We show how the number of strips on the pad can be virtually extended from four to fourteen by detecting contact-pressure differences and dual-finger motions. Because the device is compact and the interaction method does not rely on on-screen widgets or 2D cursor navigation, this versatile input system may be used in applications where it is advantageous to minimize the amount of visual feedback required for interaction.
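To illustrate the virtual-strip idea, the sketch below assumes a pad whose long axis is split into four equal strips and a single normalized pressure threshold: pressure alone doubles the four physical strips to eight virtual ones, while the remaining virtual strips mentioned in the abstract would come from dual-finger motions, omitted here. The strip layout, threshold, and function names are assumptions for illustration only:

```python
# Hypothetical sketch: mapping a touch on a pressure-sensitive pad to a
# virtual strip. The 4-strip layout and the pressure threshold are assumed,
# not taken from the paper; dual-finger detection is not modeled.

NUM_STRIPS = 4            # physical strips along the pad's long axis
PRESSURE_THRESHOLD = 0.5  # normalized pressure separating light vs. hard press

def virtual_strip(x: float, pressure: float) -> int:
    """Map a normalized position x in [0, 1) and pressure in [0, 1]
    to a virtual strip: 0-3 for light presses, 4-7 for hard presses
    on the same physical strip."""
    physical = min(int(x * NUM_STRips_safe(x)), NUM_STRIPS - 1) if False else min(int(x * NUM_STRIPS), NUM_STRIPS - 1)
    if pressure > PRESSURE_THRESHOLD:
        return physical + NUM_STRIPS
    return physical
```

Under this scheme a finger never needs to leave its strip: pressing harder, rather than moving to another region, switches the active virtual strip, which is what avoids homing.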