Spatial navigation, active sensing, and most cognitive functions rely on a tight link between motor output and sensory input. Virtual reality (VR) systems simulate the sensorimotor loop, allowing flexible manipulation of enriched sensory input. Conventional rodent VR systems provide 3D visual cues linked to restrained locomotion on a treadmill, leading to a mismatch between visual and most other sensory inputs, to sensorimotor conflicts, and to restricted naturalistic behavior. To overcome these limitations, we developed a VR system (ratCAVE) that provides realistic, low-latency visual feedback directly to the head movements of completely unrestrained rodents. Immersed in this VR system, rats displayed naturalistic behavior by spontaneously interacting with and hugging virtual walls, exploring virtual objects, and avoiding virtual cliffs. We further illustrate the effect of ratCAVE-VR manipulation on hippocampal place fields. The newly-developed methodology enables a wide range of experiments involving flexible manipulation of visual feedback in freely-moving behaving animals, and opens the way to probe the multisensory nature of hippocampal spatial representation. This highly-immersive fmVR system can be a powerful tool for a broad range of neuroscience disciplines.
RESULTS
ratCAVE: VR system for freely moving rodents

We implemented a CAVE system in which a virtual environment (VE) projected onto the surface of the arena was coupled in a closed loop with real-time tracking of the animal's head. In this setup, animals could move freely in a rectangular arena similar to that used for conventional open-field experiments, with the white-painted arena serving as a projection surface. We used an array of 12 high-speed cameras (240-360 fps, NaturalPoint Inc.) to track the 3D position of the rodent's head via a rigid array of retro-reflective spheres attached to a head-mounted 3D-printed skeleton (Fig. 1c,d). This tracking system enabled us to update the rodent's head position with very high spatial (<0.1 mm) and temporal (<2.7 msec) resolution.

The VE, created using the open-source 3D modeling software Blender 3D, was rendered each frame in a full 360-degree arc around the rodent's head and mapped onto a 3D computer model of the arena using custom Python and OpenGL packages (Supplementary Fig. 3, Online Methods), warped in real time to generate a fully-interactive, geometrically-accurate 3D scene (Fig. 1b). The core cube-mapping algorithm used to map the VE onto the projection surface was identical to those described in rodent rVR setups (Supplementary Fig. 2a-c)23, but the VE projection onto the surface of the arena was continuously updated according to the changing 3D position of the rodent's head (Fig. 1b), resulting in the perception of a 3D VE that is stable in the real-world frame of reference in which the animal moves freely (Fig. 1c,d). The resulting image was front-projected onto the floor and slanted walls of the arena from a ceiling-mounted high-speed (240 fps) video projector (Supplementary Fig. 4). Because the presented virtu...
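To make the tracking step concrete, one standard way a marker-based motion-capture system recovers the 6-DOF pose of a rigid marker array is least-squares rigid alignment of the measured sphere positions to a calibration template (the Kabsch/SVD algorithm). The NumPy sketch below illustrates this principle only; it is not NaturalPoint's proprietary solver, and the function name and template data are illustrative assumptions.

import numpy as np

def rigid_pose(template, measured):
    """Least-squares rigid transform (R, t) aligning a calibration template
    of marker positions (N, 3) to the measured positions (N, 3), i.e.
    R @ template[i] + t ~= measured[i] (Kabsch algorithm)."""
    ct, cm = template.mean(axis=0), measured.mean(axis=0)
    H = (template - ct).T @ (measured - cm)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ ct
    return R, t

# Hypothetical example: `marker_template` would be captured once at
# calibration; here the "measured" frame is the template shifted by a
# pure translation, so R is the identity and t recovers the shift.
marker_template = np.array([[0, 0, 0], [30, 0, 0], [0, 25, 0], [0, 0, 20]], float)
R, t = rigid_pose(marker_template, marker_template + [1.0, 2.0, 0.5])
head_position = t + R @ marker_template.mean(axis=0)   # pose of the head rig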
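The geometry of the cube-mapping warp can be sketched in the same spirit. Conceptually, for every point on the arena's projection surface the renderer casts a ray from the rat's head and samples the 360-degree cubemap along that direction. In the actual system this runs in GPU shaders each frame; the NumPy sketch below, using the standard OpenGL cubemap face-selection convention and illustrative variable names, captures only the direction-to-texel mapping.

import numpy as np

def cubemap_lookup(surface_points, head_pos):
    """For arena-surface points (N, 3) and a head position (3,), return the
    cube face index (0..5 -> +x,-x,+y,-y,+z,-z) and (u, v) in [0, 1] where
    the view ray from the head samples the cubemap (OpenGL convention)."""
    d = surface_points - head_pos                     # view rays from the head
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    x, y, z = d.T
    ax, ay, az = np.abs(d).T

    # The dominant axis of each ray selects the cube face.
    face = np.where(ax >= np.maximum(ay, az), np.where(x >= 0, 0, 1),
           np.where(ay >= az,                 np.where(y >= 0, 2, 3),
                                              np.where(z >= 0, 4, 5)))

    # Per-face (s, t, major-axis) components, one choice per face.
    sc = np.choose(face, [-z,  z,  x,  x,  x, -x])
    tc = np.choose(face, [-y, -y,  z, -z, -y, -y])
    ma = np.choose(face, [ax, ax, ay, ay, az, az])
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)

# Per frame: the same surface points map to new cubemap texels as the
# tracked head moves, which is what keeps the projected VE stable in the
# room's frame of reference. Coordinates below are illustrative.
floor_points = np.array([[0.1, 0.0, 0.2], [-0.3, 0.0, 0.4]])
face, u, v = cubemap_lookup(floor_points, head_pos=np.array([0.0, 0.3, 0.0]))

The key design point this sketch exposes is that, unlike rVR setups with a fixed virtual camera, both the cubemap render and its sampling are re-evaluated from the new head position on every frame.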