Virtual prototyping tools have already captured the industry's interest as viable design tools. One of the key challenges for the research community is to extend the capabilities of Virtual Reality technology beyond its current scope of ergonomics and design reviews. The research presented in this paper is part of a larger research programme that aims to perform maintainability assessment on virtual prototypes. This paper discusses the design and implementation of a geometric constraint manager designed to support physical realism and interactive assembly and disassembly tasks within virtual environments. The key techniques employed by the constraint manager are direct interaction, automatic constraint recognition, constraint satisfaction and constrained motion. Various optimisation techniques have been implemented to achieve real-time interaction with large industrial models.
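As a minimal sketch of the constraint-recognition and constrained-motion ideas named above (the feature representation, tolerance values, and function names here are assumptions for illustration, not the paper's implementation): a coaxial constraint between two cylindrical mating features can be recognised when their axes nearly align, after which user motion is projected onto the remaining degree of freedom.

```python
import numpy as np

# Illustrative sketch only: feature representation (axis point + direction),
# tolerances, and the sliding degree of freedom are assumed for exposition,
# not taken from the constraint manager described in the paper.

ANGLE_TOL = np.radians(5.0)   # max axis misalignment to trigger recognition
DIST_TOL = 0.01               # max axis-to-axis offset (model units)

def recognise_coaxial(p1, d1, p2, d2):
    """Return True if axis (p1, d1) is close enough to axis (p2, d2)
    for a coaxial constraint to be recognised."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    angle = np.arccos(np.clip(abs(d1 @ d2), -1.0, 1.0))
    # perpendicular offset from p1 to the second axis
    offset = (p1 - p2) - ((p1 - p2) @ d2) * d2
    return angle < ANGLE_TOL and np.linalg.norm(offset) < DIST_TOL

def constrained_translation(motion, axis):
    """Constrained motion: project a requested translation onto the
    remaining degree of freedom (sliding along the recognised axis)."""
    axis = axis / np.linalg.norm(axis)
    return (motion @ axis) * axis
```

Snapping the moving part onto the recognised axis (constraint satisfaction) would then precede any further constrained translation of this kind.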
Virtual environment technology is now beginning to be recognised as a powerful design tool in industrial sectors such as Manufacturing, Process Engineering, Construction, Automotive and Aerospace. It allows engineers from different design disciplines to visualise a design from different viewpoints, providing a powerful design analysis tool in support of the concurrent engineering philosophy. A common weakness of current commercial virtual environments is the lack of efficient geometric constraint management facilities, such as run-time constraint detection and the maintenance of constraint consistency, for supporting accurate part positioning and constrained 3D manipulation. These environments also need to be designed to support users as they complete their tasks. This paper describes the software architecture of a constraint-based virtual environment that supports interactive assembly of component parts, embedded within a task-based environment that provides contextual help and allows the structure of tasks to be easily altered for rapid prototyping.
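To make the rapid-prototyping point concrete, a task structure can be kept as plain data, so that its steps and contextual help are edited without code changes. The following is a hypothetical sketch; all field names and step text are invented here and are not the paper's actual architecture.

```python
# Hypothetical sketch of a data-driven task structure: because steps are
# plain data, the sequence can be reordered or edited for rapid prototyping,
# and each step carries its own contextual help. All names are invented.

PUMP_TASK = [
    {"step": "Locate the pump housing", "help": "The housing is highlighted."},
    {"step": "Remove the retaining bolts", "help": "Use the socket tool."},
    {"step": "Extract the impeller", "help": "Slide it along its axis."},
]

def run_task(task, present):
    """Walk the task, presenting each step with its contextual help."""
    for number, step in enumerate(task, start=1):
        present(f"Step {number}: {step['step']}", step["help"])

run_task(PUMP_TASK, lambda step, help_text: print(step, "|", help_text))
```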
This paper outlines the development of a virtual environment for constraint-based assembly and maintenance task simulation and analysis of large-scale mechanical products. It is important that allowance is made for maintenance activities during a product's design phase, to help reduce the lifetime operating costs of such large-scale mechanical products. Unfortunately, current CAE systems do not support such assessment capabilities, and these issues are typically addressed using expensive and time-consuming physical prototypes. The design and implementation of an immersive virtual prototyping environment for assessing the assemblability and maintainability of large-scale mechanical products, along with its use as a maintenance training tool, are outlined. Procedures and environments for realistic component assembly, constraint recognition, automatic disassembly sequence generation, and maintenance review and training are presented. The simulation of maintenance operations allows maintenance to be addressed early in the design stages. This reduces the risk of unforeseen problems creeping into the design as it progresses through its life cycle, saving both time and money while improving product quality.
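Automatic disassembly sequence generation can be illustrated as a topological sort over a precedence graph, in which an edge records that one part blocks the removal of another. This is a generic sketch under that assumption; the example parts and blocking relations below are invented, not taken from the paper.

```python
from collections import defaultdict, deque

# Generic sketch: an edge (a, b) means part a blocks the removal of part b
# and must be taken out first. The paper derives such precedence from
# recognised constraints; the graph here is invented for illustration.

def disassembly_order(parts, blocks):
    indeg = {p: 0 for p in parts}
    succ = defaultdict(list)
    for blocker, blocked in blocks:
        succ[blocker].append(blocked)
        indeg[blocked] += 1
    ready = deque(p for p in parts if indeg[p] == 0)  # removable right now
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in succ[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                ready.append(q)
    return order  # a valid removal sequence if the graph is acyclic

print(disassembly_order(
    ["bolt", "cover", "gasket", "pump"],
    [("bolt", "cover"), ("cover", "gasket"), ("gasket", "pump")]))
# -> ['bolt', 'cover', 'gasket', 'pump']
```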
Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other is doing but also to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough for their focus to be interpreted during a multiway interaction, alongside other verbal and non-verbal communication. The spatial challenge met was to maintain communicative eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical, and especially temporal, characteristics of the system.
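The spatial-alignment challenge can be sketched as calibrating each site into the shared frame of reference with a rigid transform, so that a gaze ray tracked at one site can be rendered correctly at the others. The function names and matrix convention below are assumptions for illustration, not EyeCVE's actual interface.

```python
import numpy as np

# Illustrative sketch of spatial alignment between remote sites: each site
# is calibrated into a shared frame by a rigid 4x4 transform, so a tracked
# gaze ray from one site can be reproduced at another. Names and the
# homogeneous-coordinate convention here are assumed, not from the paper.

def make_site_transform(rotation_3x3, translation_3):
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_3
    return T

def gaze_to_shared_frame(site_T, origin_local, direction_local):
    """Map a gaze ray (origin + unit direction) from a site's tracker
    frame into the shared coordinate frame."""
    origin = site_T @ np.append(origin_local, 1.0)
    direction = site_T[:3, :3] @ direction_local
    return origin[:3], direction / np.linalg.norm(direction)
```

With every site mapped into the same frame, an avatar's eyes can be oriented along the transformed ray, preserving the focus of gaze for remote participants.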
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of preliminary work towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact this would have on interaction between users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar, which had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference between subjects' ability to judge which objects an avatar is looking at when only head gaze is displayed and when both eye- and head-gaze data are displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on subjects' ability to identify where the avatar was looking. The final experiment examined the effects of stereo and mono viewing of the scene, with subjects again asked to identify where the avatar was looking. This experiment showed no difference between the two viewing conditions in subjects' ability to detect where the avatar was gazing. The article concludes with a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
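The role of convergence in the second experiment can be made concrete with a standard triangulation: given binocular eye rays, the fixation point can be estimated as the midpoint of their closest approach, a depth cue that a single monocular ray cannot supply. This is a generic geometric sketch, not the study's apparatus code.

```python
import numpy as np

# Generic closest-approach triangulation of two eye rays, illustrating why
# convergence (two rays) pins down fixation depth where one ray cannot.
# Ray origins/directions are assumed inputs, not the study's data format.

def fixation_point(o_l, d_l, o_r, d_r):
    """Estimate the 3D fixation point from left/right eye rays
    (origin o, direction d) via the closest-approach midpoint."""
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # near-parallel rays: no convergence cue
        return None
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
```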