Photo-identification is an invaluable method for documenting associations. Under the assumption that individuals photographed close together in time are also physically close in space, the time-stamp metadata embedded in digital photographs offers an opportunity to base association analyses on the time elapsed between images. We tested this via an analysis of associations within a population of bottlenose dolphins (Tursiops truncatus) in Doubtful Sound, New Zealand, comparing the widely used group-membership method with an alternative time-based method. The overall social structures recovered by the two methods were similar: high degrees of association among all individuals and little support for sub-groups. The time-based method also yielded more precise pairwise association indices. This study validates the use of time as a basis for association analyses. Importantly, the method can be applied retrospectively to any photo-ID data set in which images of uniquely identifiable individuals are time-stamped by the camera.
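As a rough illustration of the time-based approach, the sketch below (not the study's code) derives pairwise association indices from time-stamped photo-ID records: two individuals photographed within an assumed 10-minute window on the same sampling day count as associated, and a simple ratio index is computed per pair. The example records, the window length, and the choice of index are all illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical records: (individual_id, photo timestamp from EXIF metadata).
records = [
    ("ID01", datetime(2020, 1, 5, 9, 12)),
    ("ID02", datetime(2020, 1, 5, 9, 14)),
    ("ID03", datetime(2020, 1, 5, 14, 40)),
    ("ID01", datetime(2020, 1, 6, 10, 2)),
    ("ID03", datetime(2020, 1, 6, 10, 5)),
]

WINDOW = timedelta(minutes=10)  # assumed time threshold for "associated"

by_day = defaultdict(list)      # sampling period = calendar day
for ind, t in records:
    by_day[t.date()].append((ind, t))

seen = defaultdict(int)   # periods in which an individual was identified
both = defaultdict(int)   # periods in which both members of a pair were identified
x = defaultdict(int)      # periods in which a pair's photos fell within WINDOW

for recs in by_day.values():
    inds = sorted({ind for ind, _ in recs})
    for ind in inds:
        seen[ind] += 1
    for pair in combinations(inds, 2):
        both[pair] += 1
    assoc = {tuple(sorted((a, b)))
             for (a, ta), (b, tb) in combinations(recs, 2)
             if a != b and abs(ta - tb) <= WINDOW}
    for pair in assoc:
        x[pair] += 1

# Simple ratio index: associations / periods in which either member was seen.
for (a, b), n in sorted(x.items()):
    sri = n / (seen[a] + seen[b] - both[(a, b)])
    print(f"{a}-{b}: {sri:.2f}")
```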
A small population of approximately 68 bottlenose dolphins (Tursiops truncatus) resident in Doubtful Sound, New Zealand, is subject to physiologically challenging conditions and to anthropogenic pressure from tourism. A voluntary Code of Management incorporating dolphin protection zones (DPZs), in which tour-boat access is limited, was established in 2008. Kernel density estimation (KDE) was used to quantify dolphin habitat use over a 13-year period, to describe seasonal variation and decadal-scale consistency in habitat use, and to provide quantitative estimates of the overlap between DPZs and the core areas (50% volume contour) of habitat use. Habitat use varied seasonally: the inner fjord was used more frequently in warmer months, with a shift toward the outer fjord in colder months. Patterns of habitat use were highly consistent over the 13-year duration of the study. The spatial overlap between core dolphin habitat and DPZs was low (<18%) overall, and some DPZs were rarely used during colder periods. Consistency in habitat use through time supports a spatial management approach, but the low overlap between core habitat and current DPZs suggests that expanding the DPZs would confer greater protection.
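A minimal sketch of this kind of analysis, assuming projected sighting coordinates and a rectangular stand-in for a DPZ polygon (the study's actual data and zone geometries are not reproduced here): it fits a Gaussian KDE to sightings, extracts the 50% volume contour as the density threshold enclosing half the probability mass, and measures the fraction of that core area falling inside the zone.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical dolphin sighting coordinates (projected, in km).
sightings = rng.normal(loc=[5.0, 3.0], scale=[1.5, 0.8], size=(300, 2))

kde = gaussian_kde(sightings.T)

# Evaluate the KDE on a regular grid covering the study area.
xs = np.linspace(0, 10, 200)
ys = np.linspace(0, 6, 120)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# 50% volume contour: the density threshold above which 50% of the
# total probability mass lies (the "core area").
flat = np.sort(density.ravel())[::-1]
cum = np.cumsum(flat)
cum /= cum[-1]
threshold = flat[np.searchsorted(cum, 0.5)]
core = density >= threshold

# Hypothetical rectangular DPZ; real zones would be arbitrary polygons.
dpz = (gx >= 6.0) & (gx <= 9.0) & (gy >= 2.0) & (gy <= 5.0)

overlap = (core & dpz).sum() / core.sum()
print(f"Fraction of core area inside the DPZ: {overlap:.2f}")
```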
Photo identification is an important tool in the conservation management of endangered species, and recent developments in artificial intelligence are revolutionizing existing workflows to identify individual animals. In 2015, the National Oceanic and Atmospheric Administration hosted a Kaggle data science competition to automate the identification of endangered North Atlantic right whales (Eubalaena glacialis). The winning algorithms, developed by Deepsense.ai, identified individuals with 87% accuracy using a series of convolutional neural networks to locate the region of interest, create standardized photographs of uniform size and orientation, and then identify the correct individual. Since that time, we have brought in many more collaborators as we moved from prototype to production. Leveraging the existing infrastructure built by Wild Me, the developers of Flukebook, we have created a web-based platform that allows biologists with no machine learning expertise to use semi-automated photo identification of right whales. New models were generated on an updated dataset using the winning Deepsense.ai algorithms. Given the morphological similarity between the North Atlantic right whale and the closely related southern right whale (Eubalaena australis), we expanded the system to incorporate the largest long-term photo identification catalogs from around the world, including those from the United States, Canada, Australia, South Africa, Argentina, Brazil, and New Zealand. The system is now fully operational with multi-feature matching for both North Atlantic right whales and southern right whales from aerial photos of their heads (Deepsense), lateral photos of their heads (Pose Invariant Embeddings), flukes (CurvRank v2), and peduncle scarring (HotSpotter). We hope to encourage researchers to embrace both broad data collaborations and artificial intelligence to increase our understanding of wild populations and aid conservation efforts.
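The staged design described above can be sketched schematically as three components. The modules below are illustrative stand-ins, not the Deepsense.ai networks, and the `standardize` step is reduced to a resize where the real pipeline crops and re-orients the head region.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoIDetector(nn.Module):
    """Stage 1: regress a normalized bounding box around the whale's head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.box = nn.Linear(16, 4)  # (cx, cy, w, h) in [0, 1]

    def forward(self, img):
        return torch.sigmoid(self.box(self.features(img)))

def standardize(img, box, size=128):
    """Stage 2 placeholder: a real implementation would crop to `box` and
    rotate to a canonical orientation; here we only resize."""
    return F.interpolate(img, size=(size, size), mode="bilinear",
                         align_corners=False)

class IndividualClassifier(nn.Module):
    """Stage 3: score the standardized crop against known individuals."""
    def __init__(self, n_individuals):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_individuals)

    def forward(self, crop):
        return self.head(self.features(crop))

def identify(img, detector, classifier):
    box = detector(img)                      # find the region of interest
    crop = standardize(img, box)             # normalize size and orientation
    return classifier(crop).softmax(dim=-1)  # rank candidate individuals

# Example: rank a hypothetical catalog of 100 individuals for one photo.
probs = identify(torch.rand(1, 3, 256, 256), RoIDetector(),
                 IndividualClassifier(100))
```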
Researchers can investigate many aspects of animal ecology through noninvasive photo-identification. Photo-identification is becoming more efficient as matching individuals between photos is increasingly automated. However, the convolutional neural network models that have facilitated this change need many training images to generalize well. As a result, they have often been developed for individual species that meet this threshold. These single-species methods may underperform because they ignore potential similarities in identifying characteristics and in the photo-identification process among species. In this paper, we introduce a multi-species photo-identification model based on a state-of-the-art method in human facial recognition, the ArcFace classification head. Our model uses two such heads to jointly classify species and identities, allowing species to share information and parameters within the network. As a demonstration, we trained this model with 50,796 images from 39 catalogues of 24 cetacean species, evaluating its predictive performance on 21,192 test images from the same catalogues. We further evaluated its predictive performance with two external catalogues composed entirely of identities that the model did not see during training. The model achieved a mean average precision (MAP) of 0.869 on the test set. Ten of the 39 catalogues, representing seven species, achieved a MAP score over 0.95. For some species, there was notable variation in performance among catalogues, largely explained by variation in photo quality. Finally, the model appeared to generalize well, with the two external catalogues scoring similarly to their species' counterparts in the larger test set. From our cetacean application, we provide a list of recommendations for potential users of this model, focusing on those with cetacean photo-identification catalogues. For example, users with high quality images of animals identified by dorsal nicks and notches should expect near optimal performance. Users can expect decreasing performance for catalogues with higher proportions of indistinct individuals or poor quality photos. Finally, we note that this model is currently freely available as code in a GitHub repository and as a graphical user interface, with additional functionality for collaborative data management, via Happywhale.com.
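A compact sketch of the dual-head idea, assuming a PyTorch backbone that returns fixed-length embeddings; the margin and scale values are common ArcFace defaults rather than the paper's settings, and the joint loss is shown as a plain sum.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """ArcFace: cosine logits with an additive angular margin on the target
    class. s (scale) and m (margin) are typical defaults, not the paper's."""
    def __init__(self, emb_dim, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cos = F.linear(F.normalize(emb), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        # Add the angular margin only on each sample's true class.
        return torch.where(target, torch.cos(theta + self.m), cos) * self.s

class MultiSpeciesID(nn.Module):
    """One shared backbone, two ArcFace heads: species and individual
    identity are classified jointly so species share network parameters."""
    def __init__(self, backbone, emb_dim, n_species, n_ids):
        super().__init__()
        self.backbone = backbone
        self.species_head = ArcFaceHead(emb_dim, n_species)
        self.id_head = ArcFaceHead(emb_dim, n_ids)

    def forward(self, x, species, identity):
        emb = self.backbone(x)
        return (F.cross_entropy(self.species_head(emb, species), species)
                + F.cross_entropy(self.id_head(emb, identity), identity))

# Example with a toy backbone; a real model would use a deep CNN.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
model = MultiSpeciesID(backbone, emb_dim=128, n_species=24, n_ids=5000)
loss = model(torch.rand(8, 3, 64, 64),
             torch.randint(0, 24, (8,)), torch.randint(0, 5000, (8,)))
```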