Camera trap studies have become a popular medium to assess many ecological phenomena including population dynamics, patterns of biodiversity, and monitoring of endangered species. In conjunction with the benefit to scientists, camera traps present an unprecedented opportunity to involve the public in scientific research via image classifications. However, this engagement strategy comes with a myriad of complications. Volunteers vary in their familiarity with wildlife; thus, the accuracy of user-derived classifications may be biased by the commonness or popularity of species and by user experience. From an extensive multi-site camera trap study across Michigan, USA, we compiled and classified images through a public science platform called Michigan ZoomIN. We aggregated responses from 15 independent users per image using multiple consensus methods and assessed accuracy by comparing them to species identifications completed by wildlife experts. We also evaluated how different factors including consensus algorithms, study area, wildlife species, user support, and camera type influenced the accuracy of user-derived classifications. Overall accuracy of user-derived classification was 97%, although several canid (e.g., Canis lupus, Vulpes vulpes) and mustelid (e.g., Neovison vison) species were repeatedly difficult for users to identify and had lower accuracy. When validating user-derived classification, we found that study area, consensus method, and user support best explained accuracy. To overcome hesitancy associated with data collected by untrained participants, we demonstrated their value by showing that the accuracy of volunteers was comparable to that of experts when classifying North American mammals. Our hierarchical workflow that integrated multiple consensus methods led to more image classifications without extensive training, even when the expertise of the volunteer was unknown. Ultimately, adopting such an approach can harness broader participation, expedite future camera trap data synthesis, and improve allocation of resources by scholars to enhance performance of public participants and increase accuracy of user-derived data. © 2021 The Wildlife Society.
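The abstract does not specify which consensus algorithms were used to aggregate the 15 user classifications per image. As an illustration only, the minimal sketch below shows one common approach: a plurality vote over user labels for each image, scored against expert identifications. The function names, the agreement threshold, and the data layout are hypothetical and are not taken from the study itself.

```python
from collections import Counter

def plurality_consensus(user_labels, min_agreement=0.5):
    """Return the most frequent label among one image's user classifications,
    or None if agreement falls below the (hypothetical) threshold."""
    label, votes = Counter(user_labels).most_common(1)[0]
    return label if votes / len(user_labels) >= min_agreement else None

def consensus_accuracy(images, expert_labels, min_agreement=0.5):
    """Fraction of images whose consensus label matches the expert label,
    counted only over images that reach consensus."""
    matched = total = 0
    for image_id, user_labels in images.items():
        consensus = plurality_consensus(user_labels, min_agreement)
        if consensus is None:
            continue  # no consensus; such images could be escalated to experts
        total += 1
        matched += int(consensus == expert_labels[image_id])
    return matched / total if total else float("nan")

# Example: 15 user classifications for one image, compared to an expert label.
images = {"img_001": ["white-tailed deer"] * 13 + ["elk", "unknown"]}
experts = {"img_001": "white-tailed deer"}
print(consensus_accuracy(images, experts))  # 1.0
```

A hierarchical workflow like the one described could, for instance, retain confident plurality results and route low-agreement images to stricter rules or expert review, but the specific decision rules are not given in the abstract.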