1. Methods for collecting animal behavior data in natural environments, such as direct observation and bio-logging, are typically limited in spatiotemporal resolution, the number of animals that can be observed, and information about animals' social and physical environments.
2. Video imagery can capture rich information about animals and their environments, but image-based approaches are often impractical due to the challenges of processing large and complex multi-image datasets and of transforming the resulting data, such as animals' locations, into geographic coordinates.
3. We demonstrate a new system for studying behavior in the wild that uses drone-recorded videos and computer vision approaches to automatically track the location and body posture of free-roaming animals in georeferenced coordinates with high spatiotemporal resolution, embedded in contemporaneous 3D landscape models of the surrounding area.
4. We provide two worked examples in which we apply this approach to videos of gelada monkeys and multiple species of group-living African ungulates. We demonstrate how to track multiple animals simultaneously, classify individuals by species and age-sex class, estimate individuals' body postures (poses), and extract environmental features, including topography of the landscape and game trails.
5. By quantifying animal movement and posture while simultaneously reconstructing a detailed 3D model of the landscape, our approach opens the door to studying the sensory ecology and decision-making of animals within their natural physical and social environments.