End-to-end latency is the time elapsed between a user input and the corresponding output of a system. It has been shown to degrade user performance in both direct and indirect interaction. While latency can be reduced to some extent, it can also be partially compensated in software by predicting the future position of the cursor from its previous positions, velocities and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed to partially compensate end-to-end latency in indirect pointing. We combine a computer mouse with a high-frequency accelerometer to predict the future location of the pointer using Euler-based equations. Our method yields more accurate predictions than previously introduced prediction algorithms for direct touch. A controlled experiment also revealed that it can improve target acquisition time in pointing tasks.
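The kind of Euler-based extrapolation the abstract describes can be sketched as follows; the function name, sample values, and prediction horizon are illustrative assumptions, not the paper's actual implementation:

```python
def predict_pointer(p, v, a, horizon):
    """Second-order Euler/Taylor extrapolation of pointer position.

    p, v, a: latest position (px), velocity (px/s) and acceleration
    (px/s^2, e.g. derived from a high-frequency accelerometer);
    horizon: prediction interval in seconds (the latency to compensate).
    """
    return p + v * horizon + 0.5 * a * horizon ** 2

# Example: pointer at 100 px, moving at 800 px/s, accelerating at
# 2000 px/s^2, predicted 20 ms ahead:
predicted = predict_pointer(100.0, 800.0, 2000.0, 0.020)
```

In practice the horizon would be set to the measured end-to-end latency of the system, and the predicted position rendered instead of the raw one.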
Trace figures are contour drawings of people and objects that capture the essence of scenes without the visual noise of photos or other visual representations. Their focus and clarity make them ideal representations to illustrate designs or interaction techniques. In practice, creating those figures is a tedious task requiring advanced skills, even when creating the figures by tracing outlines based on photos. To mediate the process of creating trace figures, we introduce the open-source tool Esquisse. Informed by our taxonomy of 124 trace figures, Esquisse provides an innovative 3D model staging workflow, with specific interaction techniques that facilitate 3D staging through kinematic manipulation, anchor points and posture tracking. Our rendering algorithm (including stroboscopic rendering effects) creates vector-based trace figures of 3D scenes. We validated Esquisse with an experiment where participants created trace figures illustrating interaction techniques, and results show that participants quickly managed to use and appropriate the tool.
Figure 1: A user interacting with ForceEdge on a laptop computer (left) and on a smartphone (right). Left: She wants to select a large portion of text. To do so, (a) she presses the physical button of the trackpad, then (b) moves her finger on the trackpad to move the pointer into the control area, and (c) controls the scrolling rate by varying the force applied to the trackpad. Right: She wants to move an object. (a) She starts moving the object, then (b) moves her finger into the control area and (c) controls the scrolling rate by varying the force applied to the touchscreen.
Jitter in interactive systems occurs when visual feedback is perceived as unstable or trembling even though the input signal is smooth or stationary. It can have multiple causes, such as sensing noise, or feedback calculations that introduce or exacerbate sensing imprecisions. Jitter can, however, occur even when each individual component of the pipeline works perfectly, as a result of the difference between the input frequency and the display refresh rate. This asynchronicity can introduce rapidly shifting latencies between the rendered feedback and its display on screen, which can result in trembling cursors or viewports. This paper contributes a better understanding of this particular type of jitter. We first detail the problem from a mathematical standpoint, from which we develop a predictive model of jitter amplitude as a function of input and output frequencies, and a new metric to measure this spatial jitter. Using touch input data gathered in a study, we developed a simulator to validate this model and to assess the effects of different techniques and settings at any output frequency. The most promising approach, when the time of the next display refresh is known, is to estimate (interpolate or extrapolate) the user's position at a fixed time interval before that refresh. When input events occur at 125 Hz, as is common on touch screens, we show that an interval of 4 to 6 ms works well for a wide range of display refresh rates. This method cancels most of the jitter introduced by input/output asynchronicity while introducing minimal imprecision or latency.
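A minimal sketch of the estimation strategy described above, assuming linear extrapolation from the last two touch events; the event values and the 5 ms interval are illustrative choices within the recommended 4 to 6 ms range, not the paper's exact procedure:

```python
def position_before_refresh(events, next_refresh, interval=0.005):
    """Linearly extrapolate the touch position to a fixed time
    interval before the next display refresh.

    events: list of (timestamp_s, position_px) pairs, roughly 8 ms
    apart for a 125 Hz touch screen;
    next_refresh: timestamp (s) of the upcoming display refresh.
    """
    (t0, x0), (t1, x1) = events[-2], events[-1]
    velocity = (x1 - x0) / (t1 - t0)
    t_target = next_refresh - interval
    return x1 + velocity * (t_target - t1)

# 125 Hz input (one event every 8 ms), next 60 Hz refresh at ~16.7 ms:
events = [(0.000, 0.0), (0.008, 8.0)]  # a 1000 px/s drag
sample = position_before_refresh(events, next_refresh=0.0167)
```

Because every frame samples the trajectory at the same offset before its own refresh, the effective latency stays constant across frames, which is what removes the frame-to-frame trembling.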
Operating systems support autoscroll to allow users to scroll a window while in dragging mode: the user moves the pointer near the window's edge to trigger an "automatic" scrolling. The scrolling rate is typically proportional to the distance between the pointer and the window's edge. This approach suffers from several problems, especially when the window is maximized, leaving very limited space around it. Another problem is that for some operations, such as object drag-and-drop, the source and destination might be located in different windows, making it difficult for the system to infer the user's intention. In this paper, we present ForceEdge, a novel autoscroll technique relying on trackpads with force-sensing capabilities to alleviate these problems. We present the theoretical foundations of ForceEdge and the implementation of a demonstrator that can be used to compare ForceEdge to existing autoscroll methods.
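A force-to-scrolling-rate mapping of the kind ForceEdge relies on could look like the following; the transfer function, thresholds, and maximum rate are hypothetical illustrations, since the abstract does not specify the actual mapping:

```python
def scroll_rate(force, f_min=0.5, f_max=4.0, v_max=1500.0):
    """Map the force applied to the trackpad (N) to a scrolling
    rate (px/s).

    Forces at or below f_min (a deadzone, so resting pressure does
    not scroll) produce no scrolling; between f_min and f_max the
    rate grows linearly, saturating at v_max.
    """
    if force <= f_min:
        return 0.0
    t = min((force - f_min) / (f_max - f_min), 1.0)
    return t * v_max
```

Unlike distance-based autoscroll, this mapping needs no space around the window edge: the finger can stay in place while the applied force alone controls the rate.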