To adapt to an ever-changing set of threats, military forces need new methods of training. The prevalence of commercial game engines, combined with virtual reality (VR) and mixed reality environments, can benefit training. Live, virtual and constructive (LVC) training combines live people, virtual environments and simulated actors to create a more effective training environment. However, integrating virtual reality displays, software simulations and artificial weapons into a mixed reality environment poses numerous challenges. A mixed reality environment known as The Veldt was constructed to research these challenges. The Veldt consists of numerous independent displays, along with movable walls, doors and windows, allowing it to simulate numerous training scenarios. Several challenges were encountered in creating this system. Displays were precisely located using the tracking system, then configured using VR Juggler. The ideal viewpoint for each display was configured based on the expected location from which users would view it. Finally, the displays were accurately aligned to the virtual terrain model. This paper describes how the displays were configured in The Veldt, as well as how it was used for two training scenarios.
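The display alignment described above can be illustrated with the standard off-axis projection math used for fixed screens viewed from a tracked (or assumed ideal) viewpoint. The sketch below is a hedged illustration, not The Veldt's actual code: it computes view-frustum extents for a screen defined by three tracked corner positions, following the generalized perspective projection formulation commonly used in CAVE-style systems.

```python
import numpy as np

def off_axis_frustum(pa, pb, pc, eye, near):
    """Frustum extents (left, right, bottom, top) at the near plane for
    a screen given by its lower-left (pa), lower-right (pb) and
    upper-left (pc) corners and an eye position, all in tracker space."""
    # Orthonormal screen basis: right, up and normal vectors.
    vr = (pb - pa) / np.linalg.norm(pb - pa)
    vu = (pc - pa) / np.linalg.norm(pc - pa)
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)

    # Vectors from the eye to each screen corner.
    va, vb, vc = pa - eye, pb - eye, pc - eye

    # Perpendicular distance from the eye to the screen plane.
    d = -np.dot(va, vn)

    # Scale the corner offsets onto the near plane.
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# Example: a 2 m x 1.5 m wall display one meter in front of the
# "ideal" viewpoint assigned to it.
print(off_axis_frustum(
    pa=np.array([-1.0, 0.0, -1.0]),
    pb=np.array([ 1.0, 0.0, -1.0]),
    pc=np.array([-1.0, 1.5, -1.0]),
    eye=np.array([0.0, 0.75, 0.0]),
    near=0.1))
```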
Virtual reality (VR) applications are used in many areas of academic and industrial research, including engineering, the biomedical sciences and the geosciences. These applications generally focus on creating a VR environment that enhances the user experience. One of the main challenges VR application developers face is making objects within the environment move in a natural, realistic manner. Many commercial packages and programming libraries exist to help generate complex animations, including physics engines, game engines and modeling software such as Autodesk Maya. All of these tools are very useful, but they have many disadvantages when applied to VR applications, as they were not designed for VR development. To address these issues, a VR application programming interface (API) was developed to help VR developers create and visualize natural, complex animations for VR-based systems built on OpenSceneGraph. This API, called the Animation Engine 2.0, is built around concepts animators and developers are already familiar with: control points and keyframes for controlling animations. The system is time-based so that it scales to any size of VR system, which makes it possible to support different time interpolations and to incorporate acceleration into animations, creating behavioral events such as a boing, bounce, or surge. In this paper, the Animation Engine API is presented along with its integration into a VR aircraft carrier application.
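To make the time-based interpolation idea concrete, the sketch below shows scalar keyframe sampling driven by elapsed time rather than frame counts, with interchangeable easing functions (including a crude "boing"-style overshoot). All class and function names here are illustrative assumptions, not the Animation Engine 2.0's actual API.

```python
import math

def ease_in_out(t):
    """Smoothstep: accelerate, then decelerate, over t in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

def boing(t):
    """Overshoot the target and settle back: a crude 'boing' built by
    putting acceleration into the interpolation."""
    return 1.0 - math.exp(-6.0 * t) * math.cos(8.0 * t)

class Keyframe:
    def __init__(self, t, value):
        self.t, self.value = t, value

def sample(track, t, easing=ease_in_out):
    """Evaluate a scalar keyframe track at time t (seconds)."""
    if t <= track[0].t:
        return track[0].value
    if t >= track[-1].t:
        return track[-1].value
    for k0, k1 in zip(track, track[1:]):
        if k0.t <= t <= k1.t:
            u = (t - k0.t) / (k1.t - k0.t)  # normalized segment time
            return k0.value + easing(u) * (k1.value - k0.value)

# Animate a value from 0 to 2 over one second; because sampling is
# time-based, every cluster node computes the same pose for the same
# timestamp regardless of its frame rate.
track = [Keyframe(0.0, 0.0), Keyframe(1.0, 2.0)]
print(sample(track, 0.5))                 # smooth ease
print(sample(track, 0.5, easing=boing))   # springy overshoot
```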
This paper presents a scenegraph animation application programming interface (API), known as the Animation Engine, which was constructed so that software developers can easily perform smooth transitions and manipulations of scenegraph nodes. Inspired by Core Animation from Apple's Mac OS X Leopard, the Animation Engine frees the developer from managing animations on a frame-to-frame basis: one line of code specifying the property, end state and number of frames describes the animation, and the Animation Engine handles the rest in the background. The goal of the Animation Engine is to provide a simple API that integrates into existing applications with minimal effort. Additionally, techniques to improve virtual reality (VR) application performance on a large computer cluster are presented, including maintaining high frame rates with 4096 × 4096 pixel textures, eliminating extraneous network traffic and reducing long model loading times. To demonstrate the Animation Engine and these development techniques, an application known as the Virtual Universe was created. The Virtual Universe, designed to run in a six-walled CAVE on a high-end VR system comprising a high-resolution display, an eight-channel audio system, an IS-900 tracking system and a 49-node cluster running dual graphics cards, allows users to freely explore a set of space-themed environments. The architecture and development techniques for writing a stable immersive VR application on a large computer cluster, in addition to the creation of the Animation Engine, are presented in this paper.
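The "one line of code" usage pattern can be sketched as follows: the caller registers a property, an end state and a duration, and a background engine advances every active animation once per rendered frame. The names below are hypothetical; the real Animation Engine targets C++ scenegraph nodes, while this Python sketch only illustrates the control flow.

```python
class AnimationEngine:
    """Background manager: callers describe an animation once, and the
    render loop advances all active animations each frame."""
    def __init__(self):
        self._active = []

    def animate(self, node, prop, end, duration):
        """The one-line entry point: record the animation and return."""
        self._active.append({
            "node": node, "prop": prop,
            "start": getattr(node, prop), "end": end,
            "elapsed": 0.0, "duration": duration,
        })

    def update(self, dt):
        """Called once per frame; callers never touch per-frame state."""
        for a in self._active:
            a["elapsed"] = min(a["elapsed"] + dt, a["duration"])
            u = a["elapsed"] / a["duration"]
            setattr(a["node"], a["prop"],
                    a["start"] + u * (a["end"] - a["start"]))
        # Drop animations that have reached their end state.
        self._active = [a for a in self._active
                        if a["elapsed"] < a["duration"]]

class Node:  # stand-in for a scenegraph node
    def __init__(self):
        self.opacity = 1.0

engine, node = AnimationEngine(), Node()
engine.animate(node, "opacity", 0.0, duration=2.0)  # the one line
engine.update(1.0 / 60.0)  # driven by the frame loop from here on
print(node.opacity)
```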
Conceptual design involves generating hundreds to thousands of concepts and combining the best aspects of all of them into a single idea to move forward into detailed design. With the tools currently available, design teams usually model a small number of concepts and analyze them using traditional Computer-Aided Design (CAD) analysis tools. Creating and validating concepts with CAD packages is extremely time consuming, and unfortunately not all concepts can be evaluated; promising concepts can be eliminated simply because there is insufficient time and resources to apply the available tools. Additionally, these virtual models and analyses are usually of much higher fidelity than is needed at such an early stage of design. To address these issues, a desktop and immersive virtual reality (VR) framework, the Advanced Systems Design Suite (ASDS), was created to foster rapid geometry creation and concept assessment using a creation approach that does not require precise mating and dimensioning constraints during the geometry creation phase. The ASDS removes these precision constraints by using 3D manipulation tools to build concepts and by providing a custom, easy-to-use measurement system for when precise measurements are required. In this paper, the ASDS framework, along with its unique and intuitive measurement system, is presented for large-vehicle conceptual design.
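The measurement idea can be sketched briefly: geometry is positioned freely with 3D manipulators, and distances are reported only on demand rather than being enforced as mating or dimensioning constraints. The function below is an illustrative assumption, not the ASDS implementation.

```python
import numpy as np

def measure(p0, p1):
    """Distance between two user-picked points on concept geometry."""
    return float(np.linalg.norm(np.asarray(p1) - np.asarray(p0)))

# A vehicle concept is roughed out by eye with 3D manipulators; the
# designer asks for a measurement only when precision matters.
front_axle = (0.0, 0.0, 0.0)
rear_axle = (3.2, 0.0, 0.1)
print(f"wheelbase = {measure(front_axle, rear_axle):.2f} m")
```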
The two most common ways to activate intelligent voice assistants (IVAs) are button presses and trigger phrases. This paper describes a new way to invoke IVAs on smartwatches: simply raise your hand and speak naturally. To achieve this experience, we designed an accurate, low-power detector that works across a wide range of environments and activity scenarios with minimal impact on battery life, memory footprint, and processor utilization. The raise to speak (RTS) detector consists of four main components: an on-device gesture convolutional neural network (CNN) that uses accelerometer data to detect specific poses; an on-device speech CNN to detect proximal human speech; a policy model that combines signals from the motion and speech detectors; and an off-device false trigger mitigation (FTM) system to reduce unintentional invocations triggered by the on-device detector. The majority of the detector's components run on-device to preserve user privacy. The RTS detector was released in watchOS 5.0 and is running on millions of devices worldwide.
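The policy model's role can be sketched as a simple co-occurrence test: invoke the assistant only when a confident raise gesture and proximal speech are detected within a short window of each other. The thresholds, window length, and names below are illustrative assumptions, not the shipped watchOS model.

```python
from collections import deque

GESTURE_THRESH = 0.8   # assumed confidence cutoffs
SPEECH_THRESH = 0.7
WINDOW_S = 1.0         # gesture and speech must co-occur within this

class RaiseToSpeakPolicy:
    def __init__(self):
        self._gestures = deque()  # timestamps of confident raise poses

    def on_gesture(self, t, score):
        """Feed scores from the accelerometer (gesture) CNN."""
        if score >= GESTURE_THRESH:
            self._gestures.append(t)

    def on_speech(self, t, score):
        """Feed scores from the speech CNN; True means invoke the IVA
        (still subject to the off-device false trigger mitigation)."""
        # Forget gesture detections older than the co-occurrence window.
        while self._gestures and t - self._gestures[0] > WINDOW_S:
            self._gestures.popleft()
        return score >= SPEECH_THRESH and bool(self._gestures)

policy = RaiseToSpeakPolicy()
policy.on_gesture(t=0.2, score=0.93)         # wrist raised
print(policy.on_speech(t=0.6, score=0.85))   # speech follows -> True
```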