An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of human-computer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence in multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design.
The Virtual Hand Lab (VHL) is an augmented reality environment for conducting experiments in human perception and motor performance that involve grasping, manipulation, and other 3D tasks that people perform with their hands. The hardware and software testbed supports both physical and virtual objects, and object behaviors that can be specified in advance by experimenters. A testbed for conducting experiments must provide visual stimuli that depend on the configuration of the experimental apparatus, on the specific tasks being studied, and on the individual characteristics of each subject. Calibration is therefore an important concern and is the subject of this paper. A proper design leads to independent calibration steps that modularize the subsystems requiring calibration and that explicitly recognize and order the dependencies among them. We describe how the architecture of the VHL was designed to support independent apparatus-specific, experiment-specific, and subject-specific calibrations. The architecture offers benefits for any augmented reality environment by reducing re-calibration times and by identifying appropriate modularization in the software, which can result in a more robust and efficient implementation.
This paper investigates human bias, consistency, and individual differences in object manipulation in a virtual environment. Eight subjects were asked to manipulate a wooden cube to match a 3-D graphic target cube presented in 3 locations and 2 orientations. There were two visual conditions: the subject performed the tasks either with or without vision of the hand and the wooden cube. The constant errors of object translation and orientation suggested specific human biases. In terms of variable errors, visual feedback appeared to be more critical for object transportation than for object orientation. It was also found that individual differences were more pronounced in human bias than in consistency during object manipulation. These results suggest that tolerance for human bias and variability should be accommodated in human-computer interface design.
Augmented reality allows the perceived size of an object to be changed while its tangible properties remain completely unaltered. We exploited this property to examine how visual changes in object size affect the programming of fingertip forces when objects are lifted with a precision grip. Twenty-one participants performed repeated lifts of an identical grip apparatus to a height of 20 mm, held each lift for 8 seconds, and then replaced the apparatus on the table. While all physical properties of the apparatus remained unchanged, its visual appearance was altered graphically in a 3-D augmented environment. The apparatus measured grip and load forces independently. Peak grip and load forces, as well as their rates of increase, grew significantly with the size of the graphical image, even though the haptic information available to participants remained constant throughout the trials. This finding indicates a human tendency to rely, even unconsciously, on visual input to program forces in the initial lifting phase, and it confirms previous findings obtained in the physical environment, where extraneous haptic effects could not be excluded (Gordon et al. 1991a; Mon-Williams and Murray 2000; Kawai et al. 2000). The present results also suggest that existing knowledge of human manipulation tasks in the physical world may apply to augmented environments in which physical objects are enhanced by computer-generated visual components.