We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.
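The abstract does not spell out how a pigtail is recognized; as a rough illustration, the core of such a recognizer is a check for a small self-intersection (a closed loop) near the end of the pen stroke. The following Python sketch is a minimal, hypothetical version of that check; the function names, the tail-window size, and the sampling assumptions are ours, not the paper's.

```python
# Hypothetical pigtail detector: looks for a small self-intersecting
# loop near the end of a pen stroke. Names and thresholds are
# illustrative assumptions, not taken from the Scriboli paper.

def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly crosses segment p3-p4."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4) and
            ccw(p1, p2, p3) != ccw(p1, p2, p4))

def find_pigtail(points, max_loop_points=20):
    """Scan the tail of a stroke for a small closed loop.

    points: (x, y) samples in drawing order.
    Returns the index where the loop starts, or None if there is none.
    """
    n = len(points)
    if n < 4:
        return None
    last = (points[-2], points[-1])       # the newest stroke segment
    start = max(0, n - max_loop_points)   # only inspect the stroke's tail
    for i in range(start, n - 3):         # skip segments touching `last`
        if segments_intersect(points[i], points[i + 1], *last):
            return i
    return None
```

In use, a recognizer along these lines would run on each new pen sample while the lasso is being drawn; once a loop is found, the selection can be closed at the intersection point and the marking menu posted there.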
Paper Augmented Digital Documents (PADDs) are digital documents that can be manipulated either on a computer screen or on paper. PADDs, and the infrastructure supporting them, can be seen as a bridge between the digital and the paper worlds. As digital documents, PADDs are easy to edit, distribute, and archive; as paper documents, they are easy to navigate and annotate, and are well accepted in social settings. The chimeric nature of PADDs makes them well suited to many tasks, such as proofreading, editing, and annotating large-format documents like blueprints. We present an architecture that supports the seamless manipulation of PADDs using today's technologies and report on the lessons we learned while implementing the first PADD system.
Several experiments by psychologists and human factors researchers have shown that when young children execute pointing tasks, they perform at levels below older children and adults. However, these experiments have not provided user interface designers with an understanding of the severity or the nature of the difficulties young children have when using input devices. To address this need, we conducted a study to gain a better understanding of 4- and 5-year-old children's use of mice. We compared the performance of thirteen 4-year-olds, thirteen 5-year-olds, and thirteen young adults in point-and-click tasks. Plots of the paths taken by the participants show severe differences between adults' and preschool children's ability to control the mouse. We were therefore not surprised to find that age had a significant effect on accuracy, target reentry, and efficiency. We also found that target size had a significant effect on accuracy and target reentry. Measuring movement time at four different moments (first entering the target, last entering the target, pressing the button, releasing the button) showed that Fitts' law models children's performance well only up to the time they first enter the target. Overall, we found that the difference between the performance of children and adults was large enough to warrant user interface interactions designed specifically for preschool children. The results additionally suggest that children need the most help once they get close to targets.
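For reference (the formulation below is the standard Shannon form of Fitts' law used in HCI, not something reproduced from this paper), movement time $MT$ is modeled from the distance $D$ to a target and its width $W$ as

$$ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$

where $a$ and $b$ are empirically fitted constants. Read against the result above, such a model appears to capture children's initial approach toward a target, but not the final correction phase near and inside it, which is exactly where the abstract says children need the most help.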
This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed with the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. By comparison, our gestures can be performed successfully and comfortably regardless of how the pen is rotated or gripped, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both the swipe and double-tap gestures are comparable in performance to a commonly employed barrel button, without its disadvantages.
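The abstract treats grip sensing as a black box; a plausible minimal implementation is a template classifier over the barrel's capacitance image. The Python sketch below shows the general shape of such a grip-to-mode pipeline; the sensor geometry, grip labels, nearest-centroid method, and mode mapping are all our assumptions, not details from the paper.

```python
# Hypothetical grip-based mode switching for a multi-touch pen barrel.
# Sensor geometry, grip labels, and the nearest-centroid classifier are
# illustrative assumptions, not details published with MTPen.
import numpy as np

class GripClassifier:
    def __init__(self):
        self.centroids = {}  # grip name -> mean flattened capacitance image

    def train(self, samples):
        """samples: dict mapping grip name -> list of 2-D capacitance arrays."""
        for grip, images in samples.items():
            self.centroids[grip] = np.mean([img.ravel() for img in images], axis=0)

    def classify(self, image):
        """Return the trained grip whose centroid is nearest to `image`."""
        flat = image.ravel()
        return min(self.centroids,
                   key=lambda g: float(np.linalg.norm(flat - self.centroids[g])))

def mode_for_grip(grip):
    """Map a detected grip to a pen mode (an illustrative mapping)."""
    return {"tripod": "ink", "relaxed_tripod": "ink",
            "sketch": "shade", "wrap": "pan"}.get(grip, "ink")
```

A real system would additionally need to normalize for pen rotation (e.g., by rotating the capacitance image into a canonical orientation) and debounce grip changes so the mode does not flicker mid-stroke.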
As computers become more ubiquitous, direct interaction with wall-size, high-resolution displays will become commonplace. The familiar desktop computer interface is ill-suited to the affordances of these screens, such as their size and their capacity to use a pen or finger as the primary input device. Current Graphical User Interfaces (GUIs) do not take into account the cost of reaching for a faraway menu bar, for example, and they rely heavily on the keyboard for rapid interactions. GUIs are extremely powerful, but their interaction style contrasts sharply with the casual interaction style of traditional wall-size displays such as whiteboards and bulletin boards. This thesis explores how to bridge the gap between the power provided by current desktop computer interfaces and the fluid use of whiteboards and pin-boards. Based on our observations of fluid expert interactions from everyday life, such as driving a car or playing a violin, we designed and built a fluid interaction framework that encourages gesture memory, reduces the need for dialog with the user, and provides a scoping mechanism for modes. Together, these features progressively make the cognitive load of using the interface disappear; the user becomes free to focus on other tasks, the same way one can drive a car while conversing with a passenger. To validate our design, we built the Stanford Interactive Mural, a 9 Mpixel whiteboard-size screen, evaluated the performance of our proposed menu system, FlowMenu, and implemented two applications using our framework. The Geometer's Workbench allows one to explore differential geometry; PostBrainstorm is a brainstorming tool that lets users gather and organize sketches, snapshots of physical documents, and a variety of digital documents on the Interactive Mural. PostBrainstorm was tested in brainstorming sessions by professional designers. It demonstrates the feasibility of fluid, transparent interactions for complex, real-life applications.