Ecologists often study wildlife populations by deploying camera traps. This approach generates large datasets that can be difficult for research teams to evaluate manually. Researchers therefore increasingly enlist volunteers from the general public as citizen scientists to help classify images. The growing number of camera trap studies, however, makes it ever more challenging to find enough volunteers to process all projects in a timely manner. Advances in machine learning, especially deep learning, allow for accurate automatic image classification. By training models on existing datasets of images already classified by citizen scientists and then applying those models to new studies, human effort can be reduced substantially. The goals of this study were to (a) assess the accuracy of deep learning in classifying camera trap data, (b) investigate how to process datasets with only a few classified images, which are generally difficult to model, and (c) apply a trained model to a live online citizen science project. Convolutional neural networks (CNNs) were used to differentiate among images of different animal species, images of humans or vehicles, and empty images (no animals, vehicles, or humans). We used four camera trap datasets featuring a wide variety of species, different habitats, and varying numbers of images. All datasets were labelled by citizen scientists on Zooniverse. Accuracies for identifying empty images across projects ranged between 91.2% and 98.0%, whereas accuracies for identifying specific species were between 88.7% and 92.7%. Transferring information from CNNs trained on large datasets ("transfer learning") was increasingly beneficial as the size of the training dataset decreased and raised accuracy by up to 10.3%. Removing low-confidence predictions increased model accuracies to the level of citizen scientists. By combining a trained model with classifications from citizen scientists, human effort was reduced by 43% while maintaining overall accuracy in a live experiment running on Zooniverse. Ecology researchers can significantly reduce image classification time and manual effort by combining citizen scientists and CNNs, enabling faster processing of data from large camera trap studies.
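The workflow summarized above combines two techniques that are easy to illustrate in code: transfer learning (reusing a CNN pretrained on a large dataset and retraining only the classifier head) and confidence thresholding (routing low-confidence predictions back to human volunteers). The sketch below is not the authors' implementation; it is a minimal PyTorch/torchvision example, and the class count, threshold value, and function names are illustrative assumptions.

```python
# Minimal sketch (assumed names and values, not the study's code):
# transfer learning from a pretrained backbone plus confidence-based
# routing of uncertain images to human review.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10            # e.g. species + "human/vehicle" + "empty" (assumed)
CONFIDENCE_THRESHOLD = 0.9  # predictions below this go back to volunteers (assumed)

# Start from ImageNet weights and replace only the classifier head.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

def classify_batch(model: nn.Module, images: torch.Tensor):
    """Return (predicted class, confidence, needs_human_review) per image."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1)
    confidence, predicted = probs.max(dim=1)
    needs_human_review = confidence < CONFIDENCE_THRESHOLD
    return predicted, confidence, needs_human_review
```

In this kind of hybrid setup, only the flagged low-confidence images are sent to citizen scientists, which is the mechanism by which human effort can drop while overall accuracy is maintained.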
Forest-savanna mosaics are maintained by fire-mediated positive feedbacks, whereby forest is fire-suppressive and savanna is fire-promoting. Forest-savanna transitions therefore represent the interface of opposing fire regimes. Within the transition there is a threshold point at which tree canopy cover becomes sufficiently dense to shade out grasses and thus suppress fire. Before this threshold is reached, changes in fire behavior may already be occurring within the savanna, but such changes have neither been empirically described nor their drivers understood. Fire behavior is largely driven by fuel flammability. Flammability can vary significantly between grass species, and grass species composition can change near forest-savanna transitions. This study measured changes in fire behavior at eighteen forest-savanna transition sites in a vegetation mosaic in Lopé National Park in Gabon, central Africa. The extent to which these changes could be attributed to changes in grass flammability was determined using species-specific flammability traits. Results showed simultaneous suppression of fire and grass biomass once tree canopy leaf area index (LAI) reached a value of 3, indicating that a fire suppression threshold exists within the forest-savanna transition. Fires became less intense and cooler before this fire suppression threshold was reached. These changes were associated with higher LAI values, which induced a shift in the grass community from one dominated by the highly flammable Anadelphia afzeliana to one dominated by the less flammable Hyparrhenia diplandra. Changes in fire behavior were not associated with changes in total grass biomass. This study demonstrated not only the presence of a fire suppression threshold but also the mechanism of its action: grass composition mediated fire behavior within the savanna before the suppression threshold was reached, and grass species composition was in turn mediated by tree canopy cover, which was itself mediated by fire behavior. These findings highlight how biotic and abiotic controls interact and amplify each other in this mosaicked landscape to facilitate the coexistence of forest and savanna.
1. Ecological data are collected over vast geographic areas using digital sensors such as camera traps and bioacoustic recorders. Camera traps have become the standard method for surveying many terrestrial mammals and birds, but camera trap arrays often generate millions of images that are time-consuming to label. This causes significant latency between data collection and subsequent inference, which impedes conservation at a time of ecological crisis. Machine learning algorithms have been developed to improve the speed of labelling camera trap data, but it is uncertain how the outputs of these models can be used in ecological analyses without secondary validation by a human.
2. Here, we present our approach to developing, testing and applying a machine learning model to camera trap data for the purpose of achieving fully automated ecological analyses. As a case study, we built a model to classify 26 Central African forest mammal and bird species (or groups). The model generalizes to new spatially and temporally independent data (n = 227 camera stations, n = 23,868 images) and outperforms humans in several respects (e.g. detecting 'invisible' animals). We demonstrate how ecologists can evaluate a machine learning model's precision and accuracy in an ecological context by comparing species richness, activity patterns …
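The evaluation strategy described here, deriving ecological summaries from model labels and comparing them with summaries derived from human labels, can be sketched with a small amount of tabular code. The example below is not the authors' pipeline; it is a minimal pandas sketch, and the file name and column names ("station", "species_model", "species_expert", "hour") are assumptions.

```python
# Minimal sketch (assumed columns/file, not the study's pipeline):
# compare species richness and diel activity derived from model labels
# with the same quantities derived from expert labels.
import pandas as pd

def species_richness(df: pd.DataFrame, label_col: str) -> pd.Series:
    """Number of distinct species recorded at each camera station."""
    return df.groupby("station")[label_col].nunique()

def activity_pattern(df: pd.DataFrame, label_col: str, species: str) -> pd.Series:
    """Proportion of a species' detections falling in each hour of the day."""
    hours = df.loc[df[label_col] == species, "hour"]
    return hours.value_counts(normalize=True).sort_index()

labels = pd.read_csv("camera_trap_labels.csv")  # hypothetical per-image label table
richness_model = species_richness(labels, "species_model")
richness_expert = species_richness(labels, "species_expert")
mean_richness_gap = (richness_model - richness_expert).abs().mean()
```

If ecological quantities such as richness or activity curves agree closely between the two label sources, the model's outputs can be used in downstream analyses without image-by-image human validation.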