Tapping on surfaces in a typical virtual environment feels like contact with soft foam rather than a hard object. The realism of such interactions can be dramatically improved by superimposing event-based, high-frequency transient forces over traditional position-based feedback. When scaled by impact velocity, hand-tuned pulses and decaying sinusoids produce haptic cues that resemble those experienced during real impacts. Our new method for generating appropriate transients inverts a dynamic model of the haptic device to determine the motor forces required to create prerecorded acceleration profiles at the user's fingertips. After development, the event-based haptic paradigm and the method of acceleration matching were evaluated in a carefully controlled user study. Sixteen individuals blindly tapped on nine virtual and three real samples, rating the degree to which each felt like real wood. Event-based feedback achieved significantly higher realism ratings than the traditional rendering method. The display of transient signals made virtual objects feel similar to a real sample of wood on a foam substrate, while position feedback alone received ratings similar to those of foam. This work provides an important new avenue for increasing the realism of contact in haptic interactions.
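To make the event-based idea concrete, the minimal Python sketch below superimposes a decaying-sinusoid transient, scaled by impact velocity, on ordinary position-based spring feedback. This is an illustrative sketch only; the parameter values and function names are assumptions and do not come from the paper, which additionally derives transients by inverting a dynamic model of the device to match recorded accelerations.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): a decaying-sinusoid
# contact transient scaled by impact velocity, added on top of traditional
# position-based spring feedback. All parameter values are hypothetical.

STIFFNESS = 1000.0      # N/m, penalty-based spring for the virtual wall
TRANSIENT_FREQ = 300.0  # Hz, dominant frequency of the contact transient
DECAY_RATE = 90.0       # 1/s, exponential decay of the transient
AMPLITUDE_GAIN = 0.5    # N per (m/s) of impact velocity

def render_force(penetration, impact_velocity, t_since_contact):
    """Force command at one servo tick: position feedback plus event transient."""
    # Traditional position-based feedback: push back proportionally to penetration.
    spring_force = STIFFNESS * max(penetration, 0.0)

    # Event-based transient: a high-frequency decaying sinusoid whose amplitude
    # scales with how fast the user was moving at the instant of impact.
    transient = (AMPLITUDE_GAIN * impact_velocity
                 * np.exp(-DECAY_RATE * t_since_contact)
                 * np.sin(2.0 * np.pi * TRANSIENT_FREQ * t_since_contact))

    return spring_force + transient
```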
Robots that interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces. Additionally, for certain tasks, robots may need to know the haptic properties of an object before touching it. To enable better tactile understanding for robots, we propose a method of classifying surfaces with haptic adjectives (e.g., compressible or smooth) from both visual and physical interaction data. Humans typically combine visual predictions and feedback from physical interactions to accurately predict haptic properties and interact with the world. Inspired by this cognitive pattern, we propose and explore a purely visual haptic prediction model. Purely visual models enable a robot to "feel" without physical interaction. Furthermore, we demonstrate that using both visual and physical interaction signals together yields more accurate haptic classification. Our models take advantage of recent advances in deep neural networks by employing a unified approach to learning features for physical interaction and visual observations. Even though we employ little domain-specific knowledge, our model still achieves better results than methods based on hand-designed features.
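As a rough illustration of the fused visual-and-haptic approach, the sketch below defines a small two-branch network that predicts haptic adjectives from a visual feature vector and an optional physical-interaction feature vector. The layer sizes, feature dimensions, adjective count, and class name are assumptions for illustration and do not reflect the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the general multimodal idea (not the authors' architecture):
# learn features from a visual embedding and a physical-interaction signal,
# then fuse them for multi-label haptic-adjective classification.
# All dimensions and names below are illustrative assumptions.

class HapticAdjectiveNet(nn.Module):
    def __init__(self, visual_dim=512, haptic_dim=128, num_adjectives=24):
        super().__init__()
        self.visual_branch = nn.Sequential(nn.Linear(visual_dim, 256), nn.ReLU())
        self.haptic_branch = nn.Sequential(nn.Linear(haptic_dim, 256), nn.ReLU())
        # Fused head outputs one logit per adjective (multi-label setting).
        self.classifier = nn.Linear(256 + 256, num_adjectives)

    def forward(self, visual_feat, haptic_feat=None):
        v = self.visual_branch(visual_feat)
        if haptic_feat is None:
            # Purely visual prediction: the robot "feels" without touching.
            h = torch.zeros_like(v)
        else:
            h = self.haptic_branch(haptic_feat)
        return self.classifier(torch.cat([v, h], dim=-1))

# Example usage with random stand-in features for a batch of four surfaces.
model = HapticAdjectiveNet()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
probs = torch.sigmoid(logits)  # independent probability per adjective
```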
Despite the expected clinical benefits of haptic feedback, current teleoperated surgical robots do not provide it to the surgeon, largely because grounded forces can destabilize the system's closed-loop controller. This paper presents an alternative approach that enables the surgeon to feel fingertip contact deformations and vibrations while guaranteeing the teleoperator's stability. We implemented our cutaneous feedback solution on an Intuitive Surgical da Vinci Standard robot by mounting a SynTouch BioTac tactile sensor to the distal end of a surgical instrument and a custom cutaneous display to the corresponding master controller. As the user probes the remote environment, the contact deformations, DC pressure, and AC pressure (vibrations) sensed by the BioTac are directly mapped to input commands for the cutaneous device's motors using a model-free algorithm based on look-up tables. The cutaneous display continually moves, tilts, and vibrates a flat plate at the operator's fingertip to optimally reproduce the tactile sensations experienced by the BioTac. We tested the proposed approach by having eighteen subjects use the augmented da Vinci robot to palpate a heart model with no haptic feedback, only deformation feedback, and deformation plus vibration feedback. Fingertip deformation feedback significantly improved palpation performance by reducing the task completion time, the pressure exerted on the heart model, and the subject's absolute error in detecting the orientation of the embedded plastic stick. Vibration feedback significantly improved palpation performance only for the seven subjects who dragged the BioTac across the model, rather than pressing straight into it.
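To illustrate the model-free look-up-table mapping described above, the following sketch interpolates a hypothetical calibration table from sensed DC pressure to a plate-displacement command and maps AC pressure to a bounded vibration amplitude. The table values, gains, and function names are invented for illustration and are not the authors' calibration data.

```python
import numpy as np

# Illustrative sketch of a model-free look-up-table mapping: sensed BioTac
# quantities are interpolated into motor commands for the fingertip display.
# All numbers below are made up for illustration.

# Hypothetical calibration table: sensed DC pressure -> plate displacement command.
SENSED_DC_PRESSURE = np.array([0.0, 200.0, 400.0, 600.0, 800.0])  # sensor units
PLATE_DISPLACEMENT = np.array([0.0, 0.8, 1.6, 2.6, 3.5])          # mm toward fingertip

def dc_pressure_to_displacement(dc_pressure):
    """Interpolate the look-up table to get the plate displacement command."""
    return np.interp(dc_pressure, SENSED_DC_PRESSURE, PLATE_DISPLACEMENT)

def ac_pressure_to_vibration(ac_pressure, gain=0.01, limit=1.0):
    """Map high-frequency (AC) pressure to a bounded vibration amplitude command."""
    return np.clip(gain * ac_pressure, 0.0, limit)

# Example: one sensor sample mapped to device commands.
print(dc_pressure_to_displacement(500.0), ac_pressure_to_vibration(80.0))
```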