Investigating virtual environments has become an increasingly active research topic for engineers, computer and cognitive scientists, and psychologists. Although several recent studies have focused on the development of multimodal virtual environments (VEs) to study human-machine interactions, less attention has been paid to human-human and human-machine interactions in shared virtual environments (SVEs), and, to our knowledge, none at all to the extent to which adding haptic communication between people would contribute to the shared experience. We have developed a multimodal shared virtual environment and performed a set of experiments with human subjects to study the role of haptic feedback in collaborative tasks and whether haptic communication through force feedback can facilitate a sense of being and collaborating with a remote partner. The study concerns a scenario in which two participants at remote sites must co-operate to perform a joint task in an SVE. The goals of the study are (1) to assess the impact of force feedback on task performance, (2) to better understand the role of haptic communication in human-human interactions, (3) to study the impact of touch on the subjective sense of collaborating with a human, as reported by the participants based on what they could see and feel, and (4) to investigate whether gender, personality, or emotional experiences of users can affect haptic communication in SVEs.
The outcomes of this research can have a powerful impact on the development of next-generation human-computer interfaces and network protocols that integrate touch and force feedback technology into the Internet; on protocols and techniques for collaborative teleoperation such as hazardous material removal, space station repair, and remote surgery; and on the enhancement of virtual environments for performing everyday collaborative tasks in shared virtual worlds, such as co-operative teaching, training, planning and design, cybergames, and social gatherings. Our results suggest that haptic feedback significantly improves task performance and contributes to a 'sense of togetherness' in SVEs. In addition, the results show that experiencing visual feedback only at first, followed by visual plus haptic feedback, elicits better performance than presenting visual plus haptic feedback first followed by visual feedback only.
We have developed a computer-based training system to simulate laparoscopic procedures in virtual environments (VEs) for medical training. The major hardware components of our system include a computer monitor to display visual interactions between three-dimensional (3-D) virtual models of organs and instruments together with a pair of force feedback devices interfaced with laparoscopic instruments to simulate haptic interactions. In order to demonstrate the practical utility of the training system, we have chosen to simulate a surgical procedure that involves inserting a catheter into the cystic duct using a pair of laparoscopic forceps. This procedure is performed during laparoscopic cholecystectomy (gallbladder removal) to search for gallstones in the common bile duct. Using the proposed system, the user can be trained to grasp and insert a flexible and freely moving catheter into the deformable cystic duct in virtual environments. As the catheter and the duct are manipulated via simulated laparoscopic forceps, the associated deformations are displayed on the computer screen and the reaction forces are fed back to the user through the force feedback devices. A hybrid modeling approach was developed to simulate the real-time visual and haptic interactions that take place between the forceps and the catheter, as well as the duct; and between the catheter and the duct. This approach combines a finite element model and a particle model to simulate the flexible dynamics of the duct and the catheter, respectively. To simulate the deformable dynamics of the duct in real-time using finite element procedures, a modal analysis approach was implemented such that only the most significant vibration modes of the duct were selected to compute the deformations and the interaction forces. 
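The modal-analysis reduction described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes a lumped (diagonal) mass matrix, which makes the generalized eigenproblem solvable with mass-normalization and plain NumPy, and it integrates the decoupled modal oscillators with semi-implicit Euler. The damping ratio `zeta` is a hypothetical parameter, not from the paper.

```python
import numpy as np

def modal_reduction(K, m_lumped, n_modes):
    """Keep only the lowest-frequency vibration modes of a FE mesh.

    K        : (n, n) symmetric FE stiffness matrix.
    m_lumped : (n,) diagonal of the lumped mass matrix (assumed positive).
    Returns modal frequencies (n_modes,) and mode-shape basis (n, n_modes).
    """
    s = 1.0 / np.sqrt(m_lumped)
    Kt = s[:, None] * K * s[None, :]          # mass-normalized stiffness
    w2, V = np.linalg.eigh(Kt)                # eigenvalues come out ascending
    omega = np.sqrt(np.clip(w2[:n_modes], 0.0, None))
    Phi = s[:, None] * V[:, :n_modes]         # mode shapes in physical coords
    return omega, Phi

def step_modal(q, qdot, omega, Phi, f_ext, dt, zeta=0.05):
    """Advance the decoupled modal oscillators one haptic time step."""
    f_modal = Phi.T @ f_ext                   # project nodal forces onto modes
    qddot = f_modal - 2.0 * zeta * omega * qdot - omega**2 * q
    qdot = qdot + dt * qddot                  # semi-implicit Euler update
    q = q + dt * qdot
    u = Phi @ q                               # back to nodal displacements
    return q, qdot, u
```

Because each retained mode evolves independently, the per-step cost scales with `n_modes` rather than with the full mesh size, which is what makes the kilohertz haptic update rate attainable.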
The catheter was modeled using a set of virtual particles that were uniformly distributed along the centerline of the catheter and connected to each other via linear and torsional springs and damping elements. In order to convey to the user a sense of touching and manipulating deformable objects through force feedback devices, two haptic interaction techniques developed in our earlier work were employed. The interactions between the particles of the catheter and the duct were simulated using a point-based haptic interaction technique. The interactions between the forceps and the duct, as well as the catheter, were simulated using the ray-based haptic interaction technique, in which the laparoscopic forceps were modeled as connected line segments.
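A minimal sketch of the particle model of the catheter follows. It is not the authors' implementation: bending resistance is approximated here with springs between next-nearest neighbors, a common stand-in for the torsional elements the abstract mentions, and the stiffness and damping constants are placeholder values.

```python
import numpy as np

def catheter_forces(x, v, rest_len, k_stretch=500.0, k_bend=50.0, c_damp=1.0):
    """Spring-damper forces on a chain of particles along the catheter centerline.

    x, v     : (n, 3) particle positions and velocities.
    rest_len : rest spacing between adjacent particles.
    """
    f = -c_damp * v                          # viscous damping on every particle
    # Stretch springs link neighbors; "bend" springs link next-nearest neighbors.
    for k, skip, l0 in ((k_stretch, 1, rest_len), (k_bend, 2, 2.0 * rest_len)):
        d = x[skip:] - x[:-skip]             # vectors between connected particles
        L = np.linalg.norm(d, axis=1, keepdims=True)
        fs = k * (L - l0) * d / L            # Hooke spring along each segment
        f[:-skip] += fs                      # equal and opposite on the pair
        f[skip:] -= fs
    return f
```

Each particle interacts only with a constant number of neighbors, so the force evaluation is linear in the number of particles, which keeps the model inside the haptic update budget.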
Computer haptics, an emerging field of research that is analogous to computer graphics, is concerned with the generation and rendering of haptic virtual objects. In this paper, we propose an efficient haptic rendering method for displaying the feel of 3-D polyhedral objects in virtual environments (VEs). Using this method and a haptic interface device, the users can manually explore and feel the shape and surface details of virtual objects. The main component of our rendering method is the “neighborhood watch” algorithm that takes advantage of precomputed connectivity information for detecting collisions between the end effector of a force-reflecting robot and polyhedral objects in VEs. We use a hierarchical database, multithreading techniques, and efficient search procedures to reduce the computational time such that the haptic servo rate after the first contact is essentially independent of the number of polygons that represent the object. We also propose efficient methods for displaying surface properties of objects such as haptic texture and friction. Our haptic-texturing techniques and friction model can add surface details onto convex or concave 3-D polygonal surfaces. These haptic-rendering techniques can be extended to display dynamics of rigid and deformable objects.
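The core idea of the "neighborhood watch" algorithm, local search over precomputed connectivity, can be sketched as below. This is a simplification under stated assumptions: the adjacency structure is a plain dictionary, and centroid distance stands in for a full point-to-triangle proximity test.

```python
import numpy as np

def neighborhood_update(probe, last_tri, tri_centers, adjacency):
    """One local-search step: after first contact, test only the last
    contacted triangle and its precomputed neighbors, not the whole mesh.

    probe       : (3,) current position of the haptic probe point.
    last_tri    : index of the triangle contacted on the previous servo cycle.
    tri_centers : (n_tris, 3) triangle centroids (proximity-test stand-in).
    adjacency   : dict mapping tri_id -> list of edge-adjacent tri_ids.
    """
    candidates = [last_tri] + adjacency[last_tri]
    d = np.linalg.norm(tri_centers[candidates] - probe, axis=1)
    return candidates[int(np.argmin(d))]     # new contact triangle
```

Since the candidate set is only the previous triangle plus its neighbors, the per-cycle cost depends on local mesh connectivity, not on the total polygon count, matching the claim that the servo rate after first contact is essentially independent of model complexity.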
Virtual environments (VEs) that enable the user to touch, feel, and manipulate virtual objects through haptic interactions are expected to have applications in many areas such as medicine, CAD/CAM, entertainment, fine arts, and education. The current state of technology allows the human operator to interact with virtual objects through the probe (such as a thimble or a stylus) of a force-reflecting haptic interface. Most of the current haptic interaction algorithms model the probe as a single point and allow the user to feel the forces that arise from point interactions with virtual objects. In this paper, we propose a ray-based haptic-rendering algorithm that enables the user to touch and feel convex polyhedral objects with a line segment model of the probe. The ray-based haptic-rendering algorithm computes both forces and torques due to collisions of the tip and/or side of the probe with multiple virtual objects, as required in simulating many tool-handling applications. Since the real-time simulation of haptic interactions between a 3D tool and objects is computationally quite expensive, ray-based rendering can be considered an intermediate step toward this goal that simplifies the computational model of the tool. To compare the ray- and point-based haptic interaction techniques in the haptic perception of 3D objects, we conducted perceptual experiments in which the participants were asked to identify the shape of four different 3D primitives (sphere, cone, cylinder, and cube) that were displayed in random order using both point- and ray-based techniques. The results of the study show that, on average, 3D objects are recognized faster with ray-based rendering than with point-based rendering.
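The force-and-torque computation for a segment probe can be illustrated with a minimal penalty-based sketch against a single face plane of a convex object. The stiffness `k` and the endpoint-only penetration test are simplifying assumptions, not the paper's actual contact model.

```python
import numpy as np

def ray_probe_response(p0, p1, plane_n, plane_d, k=400.0):
    """Penalty force and torque for a line-segment probe against one face
    plane of a convex object (sketch of ray-based rendering).

    p0, p1  : (3,) handle and tip endpoints of the probe segment.
    plane_n : outward unit normal; surface points x satisfy plane_n . x = plane_d.
    """
    F = np.zeros(3)
    T = np.zeros(3)
    for p in (p0, p1):
        depth = plane_d - np.dot(plane_n, p)   # > 0 when the endpoint is inside
        if depth > 0.0:
            f = k * depth * plane_n            # penalty force along the normal
            F += f
            T += np.cross(p - p0, f)           # torque about the handle point
    return F, T
```

Because the side of the probe can penetrate while the tip does not (and vice versa), summing per-endpoint contributions yields the distinct force and torque channels that tool-handling simulations need; a point-based model can produce force only.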
In this paper, we propose a new ray-based haptic rendering method for displaying 3D objects in virtual environments. We have developed a set of software algorithms that work with a force-reflecting haptic interface and enable the user to touch and feel arbitrary 3D polyhedral virtual objects. Using the interface device and the suggested model, the user can explore the shape and feel surface details of objects, such as texture. The components of the model include a hierarchical database for storing geometrical and material properties of objects, collision detection algorithms, a simple mechanistic model for computing the forces of interaction between the 3D virtual objects and the force-reflecting device, and a haptic filtering technique for simulating surface details of objects such as smoothness and texture. The developed algorithms, together with a haptic interface device, have several applications in areas such as medicine, education, computer animation, teleoperation, entertainment, and rehabilitation.
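One common way to realize the texture filtering mentioned above is to perturb the contact normal with the gradient of a surface height field, the haptic analogue of bump mapping. The sketch below assumes a hypothetical scalar height function `height(p)` and a central-difference gradient; it illustrates the idea rather than the paper's exact filter.

```python
import numpy as np

def textured_normal(n, p, height, eps=1e-4):
    """Perturb the contact normal with the gradient of a height field.

    n      : (3,) unit surface normal at the contact point.
    p      : (3,) contact point on the nominal surface.
    height : scalar texture function h(p), sampled by central differences.
    """
    grad = np.array([(height(p + eps * e) - height(p - eps * e)) / (2.0 * eps)
                     for e in np.eye(3)])
    grad -= np.dot(grad, n) * n        # keep only the in-surface component
    m = n - grad                       # tilt the normal against the slope
    return m / np.linalg.norm(m)
```

Feeding the perturbed normal into the force computation makes the reflected force direction vary as the probe sweeps across the surface, which the hand perceives as texture even though the underlying geometry stays coarse.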