Immersive environments have become increasingly important in the past decade in areas such as training, data visualization, and entertainment. Although the terms virtual reality (VR), mixed reality (MR), and augmented reality (AR) are often used to describe different types of immersion, these technologies exist along a continuum defined by level of immersion (Milgram and Kishino 1994). VR occupies the most immersive end of this continuum, consisting of experiences in which most or all of the user's field of vision is replaced by computer-generated imagery (CGI). At the other end, AR refers to text or images overlaid onto the user's view of the world, such as the heads-up display of an airplane; the user remains immersed only in the real world. Finally, MR is the term for immersive environments that cannot be clearly classified as either AR or VR. One example of MR, discussed several times in this article, is a military training environment consisting of physical sets and props, with displays mounted inside windows and doors showing 3D images of the larger virtual world in which the training takes place.

Making MR work requires several technologies together: computer-generated (CG) graphics, positional tracking, and input devices. Graphics typically consist of animated 3D models and special effects representative of what a trainee would encounter in an actual situation. They can be front- or rear-projected onto screens or shown on video displays. Each display can be connected to the primary computer running the simulation, or the application can be clustered, with multiple computers each generating graphics for their attached displays.

Positional tracking, or simply "tracking", refers to the real-time collection of data specifying the position and orientation of objects involved in a simulation. Tracking data usually describes all six degrees of freedom (DOF) of an object in space, but in certain situations only three DOF are necessary, for example, for an object that rests on the ground, moving along two axes while rotating about a single axis. This data can be acquired in many ways: trackers can use image analysis, electromagnetic fields, solid-state accelerometers and gyroscopes, ultrasonic bursts, or a combination of these to determine an object's position and orientation.

Finally, input devices act as proxies for the real tools a trainee would use on the job. Gamepads, smartphones, and simulated weapons can stand in for switches, control panels, or real weapons. These devices send signals describing their state (e.g., whether a button is pressed) back to the computer(s) running the simulation, and they can themselves be tracked if their position and orientation are relevant to how their real-life counterparts are used.

One simulation paradigm, LVC (Live, Virtual, Constructive), involves live participants interacting with virtual players (i.e., human-controlled players operating simulated systems, such as on laptop computers) as well as computer-simulated (constructive) players. When LVC training is combined with VR ...
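As a concrete illustration of the clustered approach described above, the following sketch shows one way a small cluster might be described in code: a master node runs the simulation and broadcasts object state each frame, while each render node draws the view for the display physically attached to it. All hostnames, types, and coordinates here are illustrative assumptions, not the configuration format of any particular clustering framework.

    // Hypothetical description of a two-node render cluster. The master
    // node runs the simulation; each render node draws the view for the
    // display attached to it. (Names and coordinates are illustrative.)
    struct DisplaySurface {
        const char* name;      // e.g., a display mounted in a window or door
        float corners[3][3];   // lower-left, lower-right, and upper-left
                               // screen corners, in meters, in the shared
                               // world coordinate frame
    };

    struct RenderNode {
        const char* hostname;  // machine driving this display
        DisplaySurface display;
    };

    // Each render node derives an off-axis projection from the tracked
    // viewer position and its display's corner positions.
    const RenderNode cluster[] = {
        { "render-01", { "window_north", { {0.0f, 1.0f, 2.0f},
                                           {1.5f, 1.0f, 2.0f},
                                           {0.0f, 2.0f, 2.0f} } } },
        { "render-02", { "door_east",    { {3.0f, 0.0f, 1.0f},
                                           {3.0f, 0.0f, 2.2f},
                                           {3.0f, 2.0f, 1.0f} } } },
    };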
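Tracking data like that described above is commonly delivered as timestamped pose samples. The sketch below shows a minimal, assumed layout for both the full six-DOF case and the three-DOF ground-constrained case; the field names are illustrative rather than taken from a real tracker SDK.

    #include <cstdint>

    // Hypothetical 6-DOF pose sample, shaped the way many trackers
    // report data.
    struct PoseSample {
        std::uint32_t id;     // which tracked object (HMD, prop, etc.)
        double timestamp;     // seconds; real-time samples carry timestamps
        float position[3];    // x, y, z in meters: 3 translational DOF
        float rotation[4];    // unit quaternion (w, x, y, z): 3 rotational DOF
    };

    // A ground-constrained object needs only 3 DOF: translation along
    // two axes plus rotation (yaw) about the vertical axis.
    struct GroundPoseSample {
        std::uint32_t id;
        double timestamp;
        float x, y;           // meters along the ground plane
        float yaw;            // radians about the vertical axis
    };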
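Similarly, the state signals an input device sends back to the simulation can be modeled as a small per-frame message. The following sketch is a minimal, assumed layout combining button and analog state with an optional tracked pose; it is not a real device protocol.

    #include <cstdint>

    // Hypothetical per-frame state message from a proxy input device
    // (gamepad, smartphone, simulated weapon) to the simulation host.
    struct InputDeviceState {
        std::uint32_t deviceId;    // which device sent this update
        std::uint32_t buttons;     // one bit per button; set = pressed
        float trigger;             // analog trigger, 0.0 to 1.0
        bool  tracked;             // true if the pose fields are valid
        float position[3];         // meters, shared tracking frame
        float rotation[4];         // unit quaternion (w, x, y, z)
    };

    // Host-side helper: test whether a given button bit is pressed.
    inline bool buttonPressed(const InputDeviceState& s, unsigned bit) {
        return (s.buttons >> bit) & 1u;
    }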