Timing records of the way users interact with applications can lead to useful insights into human behavior and an application's usability. Timing records are useful for comparing different software applications and are also necessary for building and testing user models.

Westerman et al. (1996) described three methods through which user information can be obtained. The first method, using videotape or a human recorder, is not always feasible: the video recording must be analyzed, and the human recorder has known limitations. Camtasia (TechSmith) is an example of a tool that records and replays user interactions as a video file; however, timing information cannot be directly obtained from it, and the logs it creates, being videos, are resource intensive.

The second method, instrumentation, is the augmentation of an interface so that it records the user's actions. This approach can be used only if the system under study includes logging or can be modified to include it, so it is not a general solution; information collected in this way is limited to software that can be instrumented. For example, some commercial applications, such as Microsoft Word and many games, cannot be instrumented to record user behavior.

The third method is to use an unobtrusive application that runs in the background, works generically across all applications, and records and timestamps user behavior. In this article, we introduce such a program: Recording User Input (RUI), which belongs to the class of programs known as keystroke logging tools. Several such tools currently exist, but few record both mouse and key events, and even fewer produce logs in a form useful to researchers and practitioners of human-computer interaction. Several recording and playback tools can be downloaded from the Web, but these were typically developed as malicious "spyware" and provide keystroke logs without timing information.

A tool similar to RUI was developed for the Windows 3.1 platform (Westerman et al., 1996), but it appears to be no longer available. MICELAB (Baccino & Kennedy, 1995) is another similar tool; it does not run on modern computers, although Baccino and Kennedy's analysis approach remains helpful for analyzing mouse movements. Another tool, InputLogger, can be used to obtain user interactions across generic interfaces, but it works only on the Classic Macintosh (pre-Mac OS X) platform (Trewin, 1998). Several large commercial products are also available.

Description of RUI

RUI is a keystroke and mouse action logger for the Windows (2000 and XP) and Mac OS X (10.3 and later) platforms. RUI's user interface includes options for which types of actions to record, including keystrokes and mouse movements. The collected data are stored in a log file (see Figure 1) as a list of timestamps, actions, and arguments (if any), such as the key pressed or the move location. The hot keys Ctrl R and Ctrl S are provided to start and stop recording, respectively. The amount of data recorded with...
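To make the third method concrete, the following is a minimal sketch of a background logger that timestamps key and mouse events across all applications and appends them to a plain-text log of timestamp, action, and argument records. It is not RUI's implementation: it assumes the third-party pynput library, and the file name, record layout, and action names are illustrative only.

```python
# Minimal illustrative sketch of a background input logger (not RUI's code).
# Assumes the third-party pynput library: pip install pynput
import time
from pynput import keyboard, mouse

LOG_PATH = "input_log.txt"   # hypothetical log file name
start = time.time()

def log(action, argument=""):
    # Append one record: elapsed seconds, action, argument (tab-separated).
    with open(LOG_PATH, "a") as f:
        f.write(f"{time.time() - start:.3f}\t{action}\t{argument}\n")

def on_press(key):
    log("Keystroke", str(key))

def on_move(x, y):
    log("Moved", f"({x}, {y})")

def on_click(x, y, button, pressed):
    if pressed:
        log("Mouse pressed", f"{button} at ({x}, {y})")

# The listeners run in background threads, so any foreground application
# (a word processor, a game, a browser) can be observed without modification.
with keyboard.Listener(on_press=on_press) as kl, \
     mouse.Listener(on_move=on_move, on_click=on_click) as ml:
    kl.join()   # block until the keyboard listener is stopped
```

On current desktop operating systems such a logger typically requires explicit user permission (for example, accessibility or input-monitoring consent), which is appropriate given the privacy implications of keystroke logging.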
This article describes progress in providing user models with sufficient visual information and motor control to perform teleoperation with an unmodified, physically realized robot. User models built by extending cognitive models to interact directly with interfaces can provide a theoretical basis for predicting user behavior. These models can help summarize and explain usability issues in domains where conventional user testing is too time consuming, too demanding of other resources, or too dynamic for static models. The user model consists of an ACT-R cognitive model and the SegMan image-processing and interpretation system. ACT-R supports directing simple rover navigation and making response-time predictions. SegMan supports interpreting many aspects of HCI interfaces, can now interpret simple aspects of the video used in navigation tasks, and can generate key presses and mouse actions directly. Processing a limited region of each image, analogous to the human fovea, helped make the system work in real time. A study of robot teleoperation provides evidence that the cognitive and perceptual-motor model approximates human behavior (based on comparisons of task time, learning behavior, and mouse actions) in a simple navigation task. This work demonstrates that user modeling techniques are maturing to the extent that they can be used to assess interfaces for dynamic tasks by predicting performance during teleoperation, a common human-robot interaction task.
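The fovea-like restriction mentioned above (processing only a small region of each video frame rather than the whole frame) can be sketched as follows. This is an illustration under assumptions, not the SegMan implementation; the patch size, frame shape, and function name are hypothetical.

```python
import numpy as np

def foveal_patch(frame: np.ndarray, center: tuple, radius: int = 32) -> np.ndarray:
    """Return the square patch of `frame` around `center`, clipped to the frame bounds.

    Interpreting only this patch, rather than the full frame, is one way to keep
    per-frame image processing fast enough for real-time control.
    """
    y, x = center
    h, w = frame.shape[:2]
    top, bottom = max(0, y - radius), min(h, y + radius)
    left, right = max(0, x - radius), min(w, x + radius)
    return frame[top:bottom, left:right]

# Example: interpret only a 64x64 patch around the current point of attention.
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a video frame
patch = foveal_patch(frame, center=(240, 320), radius=32)
# ...pass `patch` (not `frame`) to the image-interpretation step...
```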