Evaluation and planning of assembly processes in virtual environments have become an active research area in the engineering community. However, planning of complex assemblies in virtual environments, especially large-scale virtual environments, is still hindered by limitations such as unnatural user interaction, insufficient frame rates, and deficiencies in the processing of assembly constraints. In this paper, we present MIVAS, a Multi-modal Immersive Virtual Assembly System. By viewing the virtual assembly system as a finite state machine, we incorporate tracked devices, a force-feedback dataglove, voice commands, human sounds, and a fully immersive four-sided CAVE, together with optimization techniques for both complex assembly models and assembly operations, to provide engineers with an intuitive and natural way of performing assembly evaluation and planning. Testing scenarios involving the disassembly of different components of an intelligent hydraulic excavator are described. Special attention is paid to technical issues such as the interface between CAD packages and the CAVE virtual environment, natural and intuitive user interaction including realistic virtual hand interaction and force feedback, intelligent navigation for assembly operations, and real-time display of complex assemblies.
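The abstract models the virtual assembly system as a finite state machine driven by multi-modal input events. A minimal sketch of that idea follows; the specific states and event names are illustrative assumptions, not taken from MIVAS itself:

```python
# Minimal finite-state-machine sketch for a virtual assembly session.
# States ("idle", "grasping", ...) and events ("grasp", "move", ...) are
# hypothetical placeholders for the modes and multi-modal inputs the
# abstract describes; they are not the paper's actual state chart.
class AssemblyFSM:
    TRANSITIONS = {
        ("idle", "grasp"): "grasping",
        ("grasping", "move"): "manipulating",
        ("manipulating", "snap"): "constrained",   # assembly constraint engaged
        ("manipulating", "release"): "idle",
        ("constrained", "release"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        """Advance to the next state if (state, event) is a legal transition;
        otherwise stay in the current state."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


fsm = AssemblyFSM()
fsm.handle("grasp")    # -> "grasping"
fsm.handle("move")     # -> "manipulating"
fsm.handle("release")  # -> "idle"
```

Structuring the system this way lets each input modality (glove gesture, voice command, tracked device) simply emit events, while the machine keeps the interaction in a well-defined mode.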
In virtual environments, virtual hand interactions play a key role in the human-computer interface. In particular, the virtual grasping of 3D objects provides an intuitive way for users to interact with virtual objects. This paper demonstrates the creation of a sophisticated virtual hand model that simulates natural anatomy in both its appearance and motion. To achieve good visual realism, the virtual hand is modeled with metaball modeling and visually enhanced through texture mapping. For realistic kinematics, a three-layer model (skeleton, muscle, and skin layers) is adopted to handle both the motion and the deformation of the virtual hand. We also present an approach for virtual grasping of 3D objects with the realistic virtual hand driven by a CyberGlove dataglove. Grasping heuristics are proposed based on a classification of object shapes, and simplified proxies of the virtual hand are used for real-time collision detection between the virtual hand and 3D objects.
The sense of touch is an important way for humans to perceive the world, and providing realistic haptic feedback in virtual assembly applications is essential for enhancing the sense of immersion and improving efficiency. This paper presents a novel approach to grasp identification and multi-finger haptic feedback for virtual assembly. First, the Voxmap-PointShell (VPS) algorithm is adapted to detect collisions between a dexterous virtual hand and a mechanical component, or between two mechanical components, and the collision detection results are used to guide the motion of the virtual hand. Then, collision forces at the collision points are calculated (using Hooke's law), classified, and converted. Finally, the forces received at the fingertips of the virtual hand are used to identify whether the virtual hand can grasp a mechanical component, and are mapped to forces exerted at the user's fingertips through a CyberGrasp force feedback system. Our approach has been incorporated and verified in a CAVE virtual environment.
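The collision forces above are computed with Hooke's law, i.e. a penalty force proportional to how deeply the fingertip penetrates the component's surface. A minimal sketch, with an assumed stiffness constant (the paper's actual parameters and the VPS contact data are not reproduced here):

```python
# Hooke's-law contact force sketch: F = k * d * n, where d is the
# penetration depth at the collision point and n the (unit) contact normal.
# The stiffness value is an illustrative assumption, not the paper's.
def contact_force(penetration_depth, normal, stiffness=500.0):
    """Return the 3-component penalty force for one collision point."""
    d = max(penetration_depth, 0.0)  # no force unless surfaces interpenetrate
    return tuple(stiffness * d * n for n in normal)


# 2 mm penetration straight along +y at k = 500 N/m gives a 1 N push-back.
contact_force(0.002, (0.0, 1.0, 0.0))  # -> (0.0, 1.0, 0.0)
contact_force(-0.001, (0.0, 1.0, 0.0))  # -> (0.0, 0.0, 0.0), no contact
```

Per-fingertip forces of this form are what the approach then classifies and compares to decide whether the hand's contact pattern constitutes a grasp.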