Accurate tracking and analysis of animal behavior are crucial for modern systems neuroscience. Animals can be monitored easily in confined, well-lit spaces or virtual-reality setups, but tracking freely moving behavior through naturalistic, three-dimensional (3D) environments remains a major challenge. Closed-loop control, which provides behavior-triggered stimuli and thus structures a behavioral task, is also more complicated in free-range settings. Here we present EthoLoop, a framework for studying the neuroethology of freely roaming animals, with examples from rodents and primates. Combining real-time optical tracking and on-the-fly behavioral analysis with remote-controlled stimulus-reward boxes allows us to interact directly with free-ranging animals in their habitat. We show that this closed-loop optical tracking system, assembled from off-the-shelf wireless hardware, can follow the 3D spatial position of multiple subjects in real time, continuously provide close-up views, condition behavioral patterns detected online with deep-learning methods, and be synchronized with wirelessly acquired neuronal recordings or with optogenetic feedback. Reward or stimulus feedback is provided by battery-powered, remote-controlled boxes that communicate with the tracking system and can be distributed at multiple locations in the environment. The EthoLoop framework enables a new generation of interactive yet well-controlled and reproducible neuroethological studies in large-field naturalistic settings.
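The closed-loop cycle described above (track position, classify behavior online, trigger a distributed reward box) can be summarized schematically. The following is a minimal sketch, not the authors' implementation: all function names (get_3d_position, classify_behavior, trigger_reward_box) are hypothetical stand-ins for the optical-tracking, deep-learning and wireless-reward components, here stubbed so the loop runs on its own.

```python
import random
import time

# Hypothetical stand-ins for the EthoLoop components; the real system
# uses infrared camera tracking, an online deep-learning classifier,
# and wireless reward boxes. None of these names are from EthoLoop.

def get_3d_position():
    """Stub for the optical tracker: returns an (x, y, z) estimate in meters."""
    return tuple(random.uniform(0.0, 5.0) for _ in range(3))

def classify_behavior(position):
    """Stub for the online behavioral classifier."""
    return "rearing" if position[2] > 4.0 else "locomotion"

def trigger_reward_box(box_id):
    """Stub for the wireless command sent to a remote reward box."""
    print(f"reward dispensed at box {box_id}")

# Closed-loop cycle: track, classify, and reward a target behavior.
for _ in range(100):
    pos = get_3d_position()
    if classify_behavior(pos) == "rearing":
        trigger_reward_box(box_id=1)
    time.sleep(0.01)  # ~100 Hz loop; actual latency depends on hardware
```

The key design point is that detection, classification and reward delivery all happen within one loop iteration, so the animal's behavior can be conditioned with minimal delay.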
Highlights
- Mouse lemur V1 possesses orientation preference maps with pinwheel arrangement
- The size and statistics of mouse lemur V1 pinwheels are comparable to the macaque
- Orientation preference columns only weakly scale with body size in primates
The rodent visual system has attracted great interest in recent years due to its experimental tractability, but the fundamental mechanisms the mouse uses to represent the visual world remain unclear. In the primate, researchers have argued from both behavioral and neural evidence that a key step in visual representation is 'figure-ground segmentation', the delineation of figures as distinct from backgrounds (Nakayama, He, and Shimojo 1995; Lamme 1995; Poort et al. 2012; Qiu and von der Heydt 2005). To determine whether mice also show behavioral and neural signatures of figure-ground segmentation, we trained mice on a task in which figures were defined by gratings and naturalistic textures moving counterphase to the background. Unlike primates, mice were severely limited in their ability to segment figure from ground using the opponent-motion cue, and their segmentation behavior depended strongly on the specific carrier pattern. Remarkably, when forced to localize naturalistic patterns defined by opponent motion, mice adopted a strategy of brute-force memorization of texture patterns. In contrast, primates, including humans, macaques, and mouse lemurs, could readily segment figures using the opponent-motion cue, independent of the carrier pattern. Consistent with mouse behavior, neural responses to the same stimuli recorded in mouse visual areas V1, RL, and LM also did not support texture-invariant segmentation of figures by opponent motion. Modeling revealed that the texture dependence of both the mouse's behavior and the neural responses could be explained by a feedforward neural network lacking explicit segmentation capabilities. These findings reveal a fundamental limitation in the ability of mice, compared with primates, to segment visual objects.
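To make the stimulus concrete, here is a minimal sketch of how an opponent-motion figure can be constructed: the same carrier texture fills both figure and ground, so the figure is invisible in any single static frame and is defined only by the two regions drifting in opposite directions. The noise carrier, frame size and drift speed below are illustrative assumptions, not the study's stimulus code.

```python
import numpy as np

def opponent_motion_frame(t, size=128, fig_radius=24, speed=2):
    """Return one (size x size) frame in which a circular figure region
    drifts rightward while the background drifts leftward by t * speed
    pixels. Parameters are illustrative, not those used in the study."""
    rng = np.random.default_rng(0)                # fixed seed so all frames
    texture = rng.standard_normal((size, size))   # share the same carrier
    shift = int(t * speed)
    ground = np.roll(texture, -shift, axis=1)     # background drifts left
    figure = np.roll(texture, +shift, axis=1)     # figure drifts right
    yy, xx = np.mgrid[:size, :size]
    mask = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 < fig_radius ** 2
    return np.where(mask, figure, ground)

# A short movie: the figure emerges only from the motion difference.
frames = [opponent_motion_frame(t) for t in range(10)]
```

Because figure and ground share identical texture statistics, any observer that segments the figure must use the motion cue itself rather than a static pattern, which is the property that distinguished primate from mouse behavior.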