Motion‐activated cameras (“camera traps”) are increasingly used in ecological and management studies to remotely observe wildlife and are among the most powerful tools available for wildlife research. However, camera trap studies produce millions of images that must be analysed, typically by viewing each image, to extract data for ecological analyses. We trained machine learning models using convolutional neural networks with the ResNet‐18 architecture and 3,367,383 images to automatically classify wildlife species in camera trap images from five states across the United States. We tested our model on an independent subset of United States images not seen during training and on an out‐of‐sample (or “out‐of‐distribution” in the machine learning literature) dataset of ungulate images from Canada. We also tested the ability of our model to distinguish empty images from those with animals in another out‐of‐sample dataset from Tanzania, containing a faunal community that was novel to the model. The trained model classified approximately 2,000 images per minute on a laptop computer with 16 gigabytes of RAM and achieved 98% accuracy at identifying species in the United States, the highest accuracy of such a model to date. Out‐of‐sample validation achieved 82% accuracy on the Canadian images and correctly identified 94% of images containing an animal in the Tanzanian dataset. We provide an R package (Machine Learning for Wildlife Image Classification) that allows users to (a) apply the trained model presented here and (b) train their own models using classified images of wildlife from their studies. Using machine learning to rapidly and accurately classify wildlife in camera trap images can facilitate non‐invasive sampling designs in ecological studies by reducing the burden of manually analysing images. Our R package makes these methods accessible to ecologists.
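The reported accuracy figures (98% species-level, 94% empty-versus-animal) are simple proportions of correctly classified images. A minimal sketch of that computation, with invented illustrative labels rather than data from the study:

```python
def classification_accuracy(true_labels, predicted_labels):
    """Fraction of images whose predicted class matches the true class."""
    if len(true_labels) != len(predicted_labels):
        raise ValueError("label lists must have equal length")
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

# Illustrative example: 4 of 5 images classified correctly.
truth = ["elk", "elk", "empty", "deer", "coyote"]
preds = ["elk", "deer", "empty", "deer", "coyote"]
print(classification_accuracy(truth, preds))  # 0.8
```

The same proportion applies to the binary empty-versus-animal task used for the out-of-sample Tanzanian dataset, with labels reduced to "empty" and "animal".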
Citation: Moeller, A. K., P. M. Lukacs, and J. S. Horne. 2018. Three novel methods to estimate abundance of unmarked animals using remote cameras. Ecosphere 9(8):e02331. doi:10.1002/ecs2.2331
Abstract. Abundance and density estimates are central to the field of ecology and are an important component of wildlife management. While many methods exist to estimate abundance from individually identifiable animals, it is much more difficult to estimate abundance of unmarked animals. One step toward noninvasive abundance estimation is the use of passive detectors such as remote cameras or acoustic recording devices. However, existing methods for estimating abundance from cameras for unmarked animals are limited by variable detection probability and have not taken full advantage of the information in camera trapping rate. We developed a time-to-event (TTE) model to estimate abundance from trapping rate. This estimate requires independent estimates of animal movement, so we collapsed the sampling occasions to create a space-to-event (STE) model that is not sensitive to movement rate. We further simplified the STE model into an instantaneous sampling (IS) estimator that applies fixed-area counts to cameras. The STE and IS models use time-lapse photographs to eliminate the variability in detection probability that comes with motion-sensor photographs. We evaluated the three methods with simulations and performed a case study estimating elk (Cervus canadensis) abundance from remote camera data in Idaho. Simulations demonstrated that the TTE model is sensitive to movement rate, whereas the STE and IS methods are unbiased regardless of movement. In our case study, elk abundance estimates were comparable to those from a recent aerial survey in the area, demonstrating that these new methods allow biologists to estimate abundance of unmarked populations without tracking individuals over time.
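The instantaneous sampling (IS) idea can be illustrated with a short simulation: at each time-lapse occasion a camera records a snapshot count within its known viewable area, and abundance is estimated by scaling the mean count up to the full study area. The sketch below is our own minimal illustration under an idealized uniform-placement assumption, not the authors' code, and all parameter values are invented:

```python
import random

def is_abundance_estimate(counts, viewshed_area, study_area):
    """Instantaneous sampling (IS) estimator: scale the mean snapshot
    count per camera-occasion from the viewshed up to the study area."""
    mean_count = sum(counts) / len(counts)
    return study_area * mean_count / viewshed_area

# Idealized simulation: N_true animals are placed uniformly at random at
# each instantaneous sample, so each falls in the viewshed with
# probability a / A.
random.seed(42)
N_true, A, a = 500, 100.0, 0.05   # animals; study area and viewshed (km^2)
occasions = 2000                   # camera-occasions (time-lapse samples)
counts = [sum(random.random() < a / A for _ in range(N_true))
          for _ in range(occasions)]
estimate = is_abundance_estimate(counts, a, A)
print(round(estimate))  # should be close to N_true = 500
```

Because the count is taken at an instant from a fixed, measured area, there is no detection-probability model to fit; the estimator inherits its accuracy directly from the viewable-area measurement.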
(Preprint: bioRxiv doi:10.1101/346809, first posted online June 13, 2018; not peer-reviewed. The author/funder, who is the copyright holder, has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved; no reuse allowed without permission.) We discuss the implications of this technology for ecology and considerations that should be addressed in future implementations of these methods.
A suite of recently developed statistical methods to estimate the abundance and density of unmarked animals from camera traps requires accurate estimates of the area sampled by each camera. Although viewshed area is fundamental to achieving accurate abundance estimates, there are no established guidelines for collecting this information in the field. Furthermore, while the complexities of the detection process in motion-sensor photography are generally acknowledged, viewable area (the factor common to motion-sensor and time-lapse photography) has on its own been underemphasized. We establish a common set of terminology for the component parts of viewshed area, contrast the photographic capture process and area measurements for time-lapse and motion-sensor photography, and review methods for estimating viewable area in the field. We use a case study to demonstrate the importance of accurate estimates of viewable area to abundance estimates. Time-lapse photography combined with accurate measurement of viewable area allows researchers to assume that capture probability equals 1. Motion-sensor photography requires measuring the distance to each animal and fitting a distance sampling curve to account for capture probability of <1.
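As a concrete illustration of why viewable area matters: if the viewshed is approximated as a circular sector defined by a maximum reliable detection distance and a horizontal angle of view (our illustrative assumption, not a prescription from the paper), the area, and hence any fixed-area count scaled by it, depends on the square of that distance:

```python
import math

def sector_viewshed_area(radius_m, angle_deg):
    """Viewable area approximated as a circular sector:
    (angle / 360) * pi * r^2, where r is the maximum reliable detection
    distance (m) and angle is the horizontal field of view (degrees)."""
    return (angle_deg / 360.0) * math.pi * radius_m ** 2

# Example: 12 m detection distance, 42 degree field of view.
print(round(sector_viewshed_area(12.0, 42.0), 1))  # 52.8 m^2

# Overestimating the detection distance by 2x overestimates the area 4x,
# which would bias a fixed-area abundance estimate low by a factor of 4.
print(sector_viewshed_area(24.0, 42.0) / sector_viewshed_area(12.0, 42.0))  # 4.0
```

The quadratic dependence on detection distance is what makes careful field measurement of viewable area so consequential for the resulting abundance estimates.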