Warming can impact consumer–resource interactions through multiple mechanisms. For example, warming can alter both the rate at which predators consume prey and the rate at which prey develop through vulnerable life stages. Thus, the overall effect of warming on consumer–resource interactions will depend upon the strength and asymmetry of warming effects on predator and prey performance. Here, we quantified the temperature dependence of both (a) density‐dependent predation rates for two dragonfly nymph predators on a shared mosquito larval prey, via the functional response, and (b) the development rate of mosquito larval prey to a predator‐invulnerable adult stage. We united the results of these two empirical studies using a temperature‐ and density‐dependent stage‐structured predation model to predict the effects of temperature on the number of larvae that survive to adulthood. Warming both accelerated larval mosquito development and increased dragonfly consumption. Model simulations suggest that differences in the magnitude and rate of predator and prey responses to warming determined how the magnitude of the overall effect of predation on prey survival to adulthood changed. Specifically, we found that depending on which predator species prey were exposed to in the model, the net effect of warming was either an overall reduction or no change in predation strength across a temperature gradient. Our results highlight a need for better mechanistic understanding of the differential effects of temperature on consumer–resource pairs to accurately predict how warming affects food web dynamics. A free plain language summary can be found within the Supporting Information of this article.
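To make the modelling approach concrete, the sketch below shows one minimal way such a stage-structured calculation can be set up: a Holling type II functional response whose attack rate scales exponentially with temperature, combined with a temperature-dependent larval development time, yielding the expected fraction of larvae that escape predation before emerging. This is not the authors' fitted model; the functional forms and all parameter names and values (a0, h0, d0, q, t_ref, predator and larval densities) are illustrative placeholders.

```python
import numpy as np

# Illustrative sketch only (not the fitted model from the study): a Holling
# type II functional response with a temperature-sensitive attack rate,
# combined with a temperature-dependent larval development time.
# All parameter values are hypothetical placeholders.

def attack_rate(temp_c, a0=0.05, q=0.08, t_ref=20.0):
    """Per-predator attack rate, assumed to rise exponentially with temperature."""
    return a0 * np.exp(q * (temp_c - t_ref))

def handling_time(temp_c, h0=0.5, q=-0.03, t_ref=20.0):
    """Handling time (days per prey), assumed to shorten slightly with warming."""
    return h0 * np.exp(q * (temp_c - t_ref))

def development_time(temp_c, d0=12.0, q=-0.06, t_ref=20.0):
    """Days for a larva to reach the predator-invulnerable adult stage."""
    return d0 * np.exp(q * (temp_c - t_ref))

def survival_to_adulthood(temp_c, n_larvae=50.0, n_predators=1.0):
    """Expected fraction of larvae escaping predation before emergence.

    The type II functional response f(N) = a*N / (1 + a*h*N) gives prey eaten
    per predator per day; dividing by N gives the per-larva risk, which is
    applied (held constant here) over the development window.
    """
    a = attack_rate(temp_c)
    h = handling_time(temp_c)
    per_capita_risk = a * n_predators / (1.0 + a * h * n_larvae)  # per day
    return np.exp(-per_capita_risk * development_time(temp_c))

for t in (16, 20, 24, 28, 32):
    print(f"{t} deg C: predicted survival = {survival_to_adulthood(t):.2f}")
```

With these placeholder parameters, faster larval development roughly offsets the higher consumption rate, illustrating how the net effect of warming on predation strength depends on the relative steepness of the two temperature responses.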
The type of metric and weighting method used in meta‐analysis can create bias and alter coverage of confidence intervals when the estimated effect size and its weight are correlated. Here, we investigate the bias associated with the widely used metric Hedges’ d under conditions typical of ecological meta‐analyses. We simulated data from experiments, computed effect sizes and their variances, and performed meta‐analyses applying three weighting schemes (inverse variance, sample size, and unweighted) across varying levels of effect size, within‐study replication, number of studies in the meta‐analysis, and among‐study variance. Unweighted analyses, and those using weights based on sample size, were close to unbiased and yielded coverage close to the nominal level of 0.95. In contrast, the inverse‐variance weighting scheme led to bias and low coverage, especially for meta‐analyses based on studies with low replication. This bias arose from a correlation between the estimated effect and its weight under the inverse‐variance method. In many cases, the sample‐size weighting scheme was the most efficient, and, when it was not, the differences in efficiency among the three methods were relatively minor. Thus, if using Hedges’ d, we recommend using weights based upon sample size that do not involve individual study estimates of the effect size.
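For readers unfamiliar with the quantities involved, the sketch below gives the standard formulas for Hedges' d and its sampling variance, and compares an inverse-variance weighted mean with a sample-size weighted and an unweighted mean on simulated low-replication data. The specific sample-size weight shown, n1*n2/(n1+n2), is one common choice assumed here for illustration and is not necessarily the exact scheme evaluated in the study.

```python
import numpy as np

# Minimal sketch of the quantities discussed above. The formulas for Hedges' d
# and its sampling variance are standard; the sample-size weight is one common
# choice, used here only for illustration.

def hedges_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardized mean difference (Hedges' d)."""
    df = n_t + n_c - 2
    s_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample correction factor
    return j * (mean_t - mean_c) / s_pooled

def var_hedges_d(d, n_t, n_c):
    """Sampling variance of d; note it depends on d itself, which is what
    couples the inverse-variance weight to the estimated effect."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2.0 * (n_t + n_c))

def weighted_mean(d, w):
    return np.sum(w * d) / np.sum(w)

# Toy meta-analysis of k studies with small, variable replication per arm.
rng = np.random.default_rng(1)
k, true_d = 20, 0.5
ds, ns = [], []
for _ in range(k):
    n = int(rng.integers(3, 10))
    treat = rng.normal(true_d, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    ds.append(hedges_d(treat.mean(), ctrl.mean(),
                       treat.std(ddof=1), ctrl.std(ddof=1), n, n))
    ns.append(n)
ds, ns = np.array(ds), np.array(ns)

w_invvar = 1.0 / var_hedges_d(ds, ns, ns)   # weight depends on the estimate
w_sample = (ns * ns) / (ns + ns)            # weight depends on sample size only
print(f"inverse-variance weighted mean: {weighted_mean(ds, w_invvar):.3f}")
print(f"sample-size weighted mean:      {weighted_mean(ds, w_sample):.3f}")
print(f"unweighted mean:                {ds.mean():.3f}")
```

The key point mirrored in the code is that the inverse-variance weight is a function of the estimated d, whereas the sample-size weight is not, which is why the latter avoids the estimate–weight correlation described above.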
1. Despite the wide application of meta-analysis in ecology, some of the traditional methods used for meta-analysis may not perform well given the type of data characteristic of ecological meta-analyses. 2. We reviewed published meta-analyses on the ecological impacts of global climate change, evaluating the number of replicates used in the primary studies (n_i) and the number of studies or records (k) that were aggregated to calculate a mean effect size. We used the results of the review in a simulation experiment to assess the performance of conventional frequentist and Bayesian meta-analysis methods for estimating a mean effect size and its uncertainty interval. 3. Our literature review showed that n_i and k were highly variable, distributions were right-skewed and were generally small (median n_i = 5, median k = 44). Our simulations show that the choice of method for calculating uncertainty intervals was critical for obtaining appropriate coverage (close to the nominal value of 0.95). When k was low (<40), 95% coverage was achieved by a confidence interval (CI) based on the t distribution that uses an adjusted standard error (the Hartung-Knapp-Sidik-Jonkman, HKSJ), or by a Bayesian credible interval, whereas bootstrap or z distribution CIs had lower coverage. Despite the importance of the method to calculate the uncertainty interval, 39% of the meta-analyses reviewed did not report the method used, and of the 61% that did, 94% used a potentially problematic method, which may be a consequence of software defaults. 4. In general, for a simple random-effects meta-analysis, the performance of the best frequentist and Bayesian methods was similar for the same combinations of factors (k and mean replication), though the Bayesian approach had higher than nominal (>95%) coverage for the mean effect when k was very low (k < 15). Our literature review suggests that many meta-analyses that used z distribution or bootstrapping CIs may have overestimated the statistical significance of their results when the number of studies was low; more appropriate methods need to be adopted in ecological meta-analyses.
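As an illustration of the interval methods compared in point 3, the sketch below computes both a conventional z (Wald) confidence interval and an HKSJ-adjusted interval for a random-effects mean effect. The DerSimonian–Laird estimator of among-study variance and the toy inputs (`yi`, `vi`, `k`) are assumptions made for the example, not values taken from the study.

```python
import numpy as np
from scipy import stats

# Sketch of the interval methods discussed above: a Wald-type z interval vs
# the Hartung-Knapp-Sidik-Jonkman (HKSJ) adjustment for a random-effects mean.
# Among-study variance uses the DerSimonian-Laird estimator (an assumption for
# this example); effect sizes `yi` and variances `vi` are assumed precomputed.

def dersimonian_laird_tau2(yi, vi):
    """DerSimonian-Laird estimator of among-study variance (tau^2)."""
    wi = 1.0 / vi
    mu_fe = np.sum(wi * yi) / np.sum(wi)
    q = np.sum(wi * (yi - mu_fe) ** 2)
    c = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
    return max(0.0, (q - (len(yi) - 1)) / c)

def random_effects_intervals(yi, vi, level=0.95):
    k = len(yi)
    tau2 = dersimonian_laird_tau2(yi, vi)
    wi = 1.0 / (vi + tau2)
    mu = np.sum(wi * yi) / np.sum(wi)

    # Conventional z (Wald) interval based on the model standard error.
    se_z = np.sqrt(1.0 / np.sum(wi))
    z = stats.norm.ppf(0.5 + level / 2)
    ci_z = (mu - z * se_z, mu + z * se_z)

    # HKSJ: weighted residual variance combined with a t quantile on k-1 df.
    var_hksj = np.sum(wi * (yi - mu) ** 2) / ((k - 1) * np.sum(wi))
    t = stats.t.ppf(0.5 + level / 2, df=k - 1)
    se_h = np.sqrt(var_hksj)
    ci_hksj = (mu - t * se_h, mu + t * se_h)
    return mu, ci_z, ci_hksj

rng = np.random.default_rng(0)
k = 12                            # a "low k" scenario
yi = rng.normal(0.3, 0.4, k)      # observed effect sizes (toy values)
vi = rng.uniform(0.05, 0.2, k)    # their sampling variances (toy values)
mu, ci_z, ci_hksj = random_effects_intervals(yi, vi)
print(f"mean effect = {mu:.3f}")
print(f"z CI        = ({ci_z[0]:.3f}, {ci_z[1]:.3f})")
print(f"HKSJ CI     = ({ci_hksj[0]:.3f}, {ci_hksj[1]:.3f})")
```

Because the HKSJ interval uses a t quantile with k-1 degrees of freedom and a variance derived from the observed spread of effect sizes, it widens when k is small, which is consistent with the closer-to-nominal coverage reported above for low-k meta-analyses.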