Strategic conservation efforts for cryptic species, especially bats, are hindered by limited understanding of their distributions and population trends. Integrating long‐term encounter surveys with multi‐season occupancy models provides a way to support inferences about changing occupancy probabilities and latent changes in abundance. When harnessed to a Bayesian inferential paradigm, this modeling framework offers flexibility for conservation programs that need to update prior model‐based understanding about at‐risk species with new data. This scenario is exemplified by a bat monitoring program in the Pacific Northwestern United States in which results from 8 years of surveys from 2003 to 2010 require updating with new data from 2016 to 2018. The new data were collected after the arrival of bat white‐nose syndrome and the expansion of wind power generation, stressors expected to cause population declines in at least two vulnerable species, the little brown bat (Myotis lucifugus) and the hoary bat (Lasiurus cinereus). We used multi‐season occupancy models with empirically informed prior distributions drawn from previous occupancy results (2003–2010) to assess evidence of contemporary decline in these two species. Empirically informed priors provided the bridge across the two monitoring periods and increased the precision of parameter posterior distributions, but did not alter inferences relative to use of vague priors. We found evidence of region‐wide summertime decline for the hoary bat (λ̂ = 0.86 ± 0.10) since 2010, but no evidence of decline for the little brown bat (λ̂ = 1.1 ± 0.10). White‐nose syndrome was documented in the region in 2016 and may not yet have caused regional impact to the little brown bat. However, our discovery of hoary bat decline is consistent with the hypothesis that the longer duration and greater geographic extent of the wind energy stressor (collision and barotrauma) have impacted the species.
These hypotheses can be evaluated and updated over time within our framework of pre–post impact monitoring and modeling. Our approach provides the foundation for a strategic evidence‐based conservation system and contributes to a growing preponderance of evidence from multiple lines of inquiry that bat species are declining.
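The prior-updating idea at the heart of this framework can be illustrated with a minimal sketch that is far simpler than the multi-season occupancy models used in the study: a single occupancy probability with a conjugate beta prior. The prior parameters (`a0`, `b0`) and the new detection counts below are hypothetical values for illustration, not the study's data.

```python
def update_beta_prior(alpha, beta, detections, visits):
    """Conjugate beta-binomial update: a Beta(alpha, beta) prior plus
    new detection/non-detection data yields Beta posterior parameters."""
    return alpha + detections, beta + (visits - detections)

# Hypothetical empirically informed prior from an earlier monitoring
# period (e.g., 2003-2010): prior mean 0.6, moderately concentrated.
a0, b0 = 12.0, 8.0

# Hypothetical new survey data (e.g., 2016-2018):
# 9 detections in 30 site-visits.
a1, b1 = update_beta_prior(a0, b0, detections=9, visits=30)

prior_mean = a0 / (a0 + b0)   # 0.60
post_mean = a1 / (a1 + b1)    # informative prior pulls the estimate
                              # toward earlier results; new data
                              # dominate as sample size grows
```

The same logic underlies the empirically informed priors in the abstract: posterior summaries from the first monitoring period become prior distributions for the second, increasing precision without discarding earlier evidence.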
Model choice is an almost inevitable source of uncertainty in model-based statistical analyses. While the focus of model choice was traditionally on methods for choosing a single model, methods to formally account for multiple models within a single analysis are now accessible to many researchers. The specific technique of model averaging was developed to improve predictive ability by combining predictions from a set of models. However, it is now often used to average regression coefficients across multiple models with the ultimate goal of capturing a variable's overall effect. This use of model averaging implicitly assumes the same parameter exists across models so that averaging is sensible. While this assumption may initially seem tenable, regression coefficients associated with particular explanatory variables may not hold equivalent interpretations across all of the models in which they appear, making explanatory inference about covariates challenging. Ready access to easy-to-use software, concerns about being criticized for ignoring model uncertainty, and the chance to avoid having to justify the choice of a final model have all led to the increasing popularity of model averaging in practice. We see a gap between the theoretical development of model averaging and its current use in practice, potentially leaving well-intentioned researchers with unclear inferences or difficulties justifying reasons for using (or not using) model averaging. We attempt to narrow this gap by revisiting some relevant foundations of regression modeling, suggesting more explicit notation and graphical tools, and discussing how individual model results are combined to obtain a model averaged result. Our goal is to help researchers make informed decisions about model averaging and to encourage question-focused modeling over method-focused modeling.
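As a concrete, deliberately simplified illustration of averaging a coefficient across models, the following sketch combines hypothetical AIC values and coefficient estimates using Akaike weights. All numbers are illustrative, and the caveat above applies: the averaged coefficient is only meaningful if the parameter has the same interpretation in every model.

```python
import numpy as np

def akaike_weights(aics):
    """Convert a list of AIC values to Akaike weights (sum to 1)."""
    d = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical AIC values and estimates of one covariate's coefficient
# from three candidate models (illustrative values only).
aics = [100.0, 101.5, 104.0]
betas = [0.80, 0.65, 0.40]   # same symbol, but its interpretation can
                             # shift with each model's covariate set

w = akaike_weights(aics)
beta_avg = float(np.dot(w, betas))   # weighted average across models
```

The arithmetic is trivial; the inferential difficulty the abstract raises is whether `betas` represent the "same" quantity at all once conditioning covariates differ among models.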
Bayesian data analysis (BDA) is now broadly acknowledged as an invaluable tool for modelling ecological data because it can readily accommodate hierarchical structure, as well as the observation and process uncertainties inherent in ecological systems.
Acoustic recording units (ARUs) enable geographically extensive surveys of sensitive and elusive species. However, a hidden cost of using ARU data for modeling species occupancy is that prohibitive amounts of human verification may be required to correct species identifications made by automated software. Bat acoustic studies exemplify this challenge because large volumes of echolocation calls can be recorded and automatically classified to species. The standard occupancy model requires aggregating verified recordings to construct confirmed detection/non‐detection datasets. The multistep data processing workflow is not necessarily transparent nor consistent among studies. We share a workflow diagramming strategy that could provide coherency among practitioners. A false‐positive occupancy model is explored that accounts for misclassification errors and enables a potential reduction in the number of confirmed detections. Simulations informed by real data were used to evaluate how much confirmation effort could be reduced without sacrificing the bias and precision of site‐occupancy and detection‐error estimators. We found that even under a 50% reduction in total confirmation effort, estimator properties were reasonable for our assumed survey design, species‐specific parameter values, and desired precision. For transferability, a fully documented R package, , for implementing a false‐positive occupancy model is provided. Practitioners can apply to optimize their own study design (required sample sizes, number of visits, and confirmation scenarios) for properly implementing a false‐positive occupancy model with bat or other wildlife acoustic data. Additionally, our work highlights the importance of clearly defining research objectives and data processing strategies at the outset to align the study design with desired statistical inferences.
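The simulation-based evaluation described above can be sketched in simplified form: generate latent occupancy states, then generate detections that include false positives at unoccupied sites. The parameter values (`psi`, `p11`, `p10`) and the design (200 sites, 4 visits) are hypothetical, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_detections(n_sites, n_visits, psi, p11, p10):
    """Simulate detection/non-detection data with false positives.
    psi: occupancy probability; p11: detection probability at occupied
    sites; p10: false-positive probability at unoccupied sites."""
    z = rng.binomial(1, psi, n_sites)        # latent occupancy states
    p = np.where(z == 1, p11, p10)           # per-site detection prob
    y = rng.binomial(1, p[:, None], (n_sites, n_visits))
    return z, y

z, y = simulate_detections(n_sites=200, n_visits=4,
                           psi=0.5, p11=0.6, p10=0.05)

# A naive estimator (any detection => occupied) is biased upward
# whenever p10 > 0, motivating the false-positive occupancy model.
naive_occupancy = (y.sum(axis=1) > 0).mean()
```

Repeating such simulations across confirmation scenarios (e.g., verifying only a fraction of automated detections) is the kind of design exercise the abstract describes for choosing sample sizes and visit numbers.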