Figure 1: A trainee interacts with objects to perform the procedure within the virtual environment, supervised by the trainer.

ABSTRACT
The use of virtual environments for training is strongly motivated by the need to train on sensitive equipment. Yet such applications are often developed without reusing existing components, which requires a huge amount of time. In this paper we present a full authoring platform that facilitates the development of both new virtual environments and pedagogical content for procedural training. This platform, named GVT (Generic Virtual Training), relies on innovative models and provides authoring tools that allow developers to capitalize on previous work. We present a generic model named STORM, used to describe reusable behaviors for 3D objects and reusable interactions between those objects. We also present a scenario language named LORA, which allows non-computer scientists to author varied and complex sequences of tasks in a virtual scene. Based on these models, and as an industrial validation with Nexter-Group, more than fifty operational scenarios for maintenance training on military equipment have been realized so far. We have also set up an assessment campaign, and we present its first results, which show that GVT enables trainees to learn procedures efficiently. The platform keeps evolving, and training on collaborative procedures will soon be available.
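The abstract does not detail the STORM formalism itself; purely as a minimal sketch of the underlying idea, the Python snippet below models objects that expose reusable behaviors and a single generic relation that connects any compatible pair of objects. The class names (Behavior, SceneObject), the relate function, and the unscrew example are illustrative assumptions, not the published STORM specification.

```python
# Illustrative sketch only: STORM's actual formalism is not given in the
# abstract; the names and structure below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Behavior:
    """A reusable behavior exposed by a 3D object (e.g. 'can be unscrewed')."""
    name: str
    state: dict = field(default_factory=dict)

@dataclass
class SceneObject:
    name: str
    behaviors: dict = field(default_factory=dict)

    def offers(self, behavior_name: str) -> bool:
        return behavior_name in self.behaviors

def relate(tool: SceneObject, target: SceneObject, behavior_name: str) -> str:
    """A generic, reusable interaction: it only checks that both objects
    expose compatible behaviors, so the same relation works for any pair."""
    if tool.offers(f"use_for_{behavior_name}") and target.offers(behavior_name):
        target.behaviors[behavior_name].state["done"] = True
        return f"{tool.name} applied '{behavior_name}' on {target.name}"
    return f"incompatible objects for '{behavior_name}'"

wrench = SceneObject("wrench", {"use_for_unscrew": Behavior("use_for_unscrew")})
bolt = SceneObject("bolt", {"unscrew": Behavior("unscrew")})
print(relate(wrench, bolt, "unscrew"))  # -> wrench applied 'unscrew' on bolt
```

The point of the sketch is the reuse argument: because the interaction is written against behaviors rather than against specific object classes, adding a new tool or part to a scene does not require writing a new interaction.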
This work aims to enhance the classical video viewing experience by introducing realistic haptic sensations in a consumer environment. More precisely, we propose a complete framework to both produce and render the motion embedded in audiovisual content, enhancing a natural movie viewing session. We focus on first-person point-of-view audiovisual content and propose a general workflow to address this problem. The workflow includes a novel approach for capturing both the motion and the video of the scene of interest, together with a haptic rendering system that generates a sensation of motion. Finally, we propose a complete methodology to evaluate the relevance of our framework and demonstrate the interest of our approach.
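The abstract does not specify the rendering algorithm; as one plausible, simplified illustration of the rendering step, the sketch below maps a recorded acceleration trace (assumed to be time-stamped samples produced during capture) to bounded commands for a motion-feedback actuator. The gain and command range are assumed values for illustration, not the authors' method.

```python
# Hypothetical illustration: maps captured acceleration samples to bounded
# actuator commands for a motion-feedback device. Constants are assumed values.
from typing import List, Tuple

MAX_COMMAND = 1.0   # assumed actuator command range of [-1, 1]
GAIN = 0.15         # assumed scaling from m/s^2 to command units

def render_motion(samples: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """samples: (timestamp_s, acceleration_m_s2) pairs from the capture step.
    Returns (timestamp_s, command) pairs, scaled and clamped."""
    commands = []
    for t, accel in samples:
        cmd = max(-MAX_COMMAND, min(MAX_COMMAND, GAIN * accel))
        commands.append((t, cmd))
    return commands

# Example: a short burst of forward acceleration followed by braking.
trace = [(0.00, 0.0), (0.04, 2.5), (0.08, 4.0), (0.12, -3.0)]
print(render_motion(trace))
```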
Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.
We present in this paper research focused on training scenario specification. This research is conducted in the context of a collaboration with Giat-Industries (a French military manufacturer) aiming to introduce virtual reality (VR) into maintenance training. The use of VR environments for training is strongly motivated by important needs for training on sensitive equipment that may be fragile, unavailable, costly, or dangerous. Our project, named GVT, is developed in a research/industry collaboration. A first version of GVT is already available as a final product, allowing virtual maintenance training on Giat-Industries equipment. Internal models have been designed to achieve reusability and standardization for the efficient development of new virtual training environments. In particular, we defined a scenario language named LORA, which is both textual and graphical. This language lets non-computer scientists author varied and complex tasks in a virtual scene.
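LORA's concrete textual and graphical syntax is not reproduced here; purely as an illustration of what a sequenced maintenance scenario could encode, the sketch below represents a procedure as nested sequential and parallel task blocks and prints its structure. The Task/Seq/Par constructors and the task names are hypothetical, not the LORA grammar.

```python
# Hypothetical sketch of a scenario tree with sequential and parallel blocks;
# LORA's real grammar is not reproduced here.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Task:
    name: str

@dataclass
class Seq:
    steps: List["Node"]      # children must be completed in order

@dataclass
class Par:
    branches: List["Node"]   # children may be completed in any order

Node = Union[Task, Seq, Par]

def describe(node: Node, indent: int = 0) -> None:
    pad = "  " * indent
    if isinstance(node, Task):
        print(f"{pad}- {node.name}")
    elif isinstance(node, Seq):
        print(f"{pad}sequence:")
        for child in node.steps:
            describe(child, indent + 1)
    else:
        print(f"{pad}parallel:")
        for child in node.branches:
            describe(child, indent + 1)

scenario = Seq([
    Task("open access panel"),
    Par([Task("disconnect power cable"), Task("disconnect data cable")]),
    Task("remove faulty module"),
])
describe(scenario)
```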