A recent monograph by Hurlbert raised several problems concerning the appropriate design of sampling programs to assess how, for example, the discharge of effluents into an aquatic ecosystem at a single point affects the abundance of biological populations. Key to resolving these issues is correct identification of the statistical parameter of interest, which is the mean of the underlying probabilistic "process" that produces the abundance, rather than the actual abundance itself. We describe an appropriate sampling scheme designed to detect the effect of the discharge on this underlying mean. Although not guaranteed to be universally applicable, the design should meet Hurlbert's objections in many cases. The effect of the discharge is detected by testing whether the difference between abundances at a control site and an impact site changes once the discharge begins. This requires taking samples, replicated in time, Before the discharge begins and After it has begun, at both the Control and Impact sites (hence the name BACI design). Care must be taken in choosing a control site: it should be far enough from the discharge to be largely beyond its influence, yet close enough to be influenced by the same range of natural phenomena (e.g., weather) that produce long-term changes in the biological populations. The design is not appropriate where local events cause populations at the Control and Impact sites to have different long-term trends in abundance; however, such situations can be detected statistically. We discuss the assumptions of BACI, particularly additivity (and transformations to achieve it) and independence.
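To make the core of the design concrete, here is a minimal sketch in Python (all data simulated and hypothetical; the Welch t test used here is one reasonable choice, not necessarily the exact procedure the authors prescribe) of testing whether the mean Impact-Control difference shifts from Before to After:

```python
# Minimal BACI sketch with simulated, hypothetical abundances: test whether
# the mean Impact - Control difference changes once the discharge begins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Abundances sampled on several dates at each site, Before and After.
control_before = rng.poisson(50, size=12)
impact_before  = rng.poisson(50, size=12)
control_after  = rng.poisson(50, size=12)
impact_after   = rng.poisson(35, size=12)   # simulated discharge effect

# The unit of analysis is the per-date Impact - Control difference,
# which removes natural fluctuations shared by the two sites.
d_before = impact_before - control_before
d_after  = impact_after  - control_after

# A shift in the mean difference from Before to After is the evidence
# of an impact; Welch's t test allows unequal variances.
t, p = stats.ttest_ind(d_after, d_before, equal_var=False)
print(f"estimated effect = {d_after.mean() - d_before.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```

Differencing the paired sites is what lets natural variation shared by both sites (e.g., weather-driven changes) cancel, which is why the choice of Control site matters so much.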
Quantitative synthesis across studies requires consistent measures of effect size among studies. In community ecology, these measures of effect size will often be some measure of the strength of interactions between taxa. However, indices of interaction strength vary greatly among both theoretical and empirical studies, and the connection between hypotheses about interaction strength and the metrics used to test these hypotheses is often not explicit. We describe criteria for choosing appropriate metrics and methods for comparing them among studies at three stages of designing a meta-analysis to test hypotheses about variation in interaction intensity: (1) the choice of response variable; (2) how effect size is calculated from the response in two treatments; and (3) whether there is a consistent quantitative effect across all taxa and systems studied or only qualitatively similar effects within each taxon-system combination. The consequences of different choices at each of these stages are illustrated with a meta-analysis examining the relationship between competition/facilitation intensity and productivity in plants. The analysis used a database of 296 cases in 14 studies. The results were unexpected and largely inconsistent with existing theory: competition intensity often significantly declined (rather than increased) with productivity, and facilitation was sometimes restricted to more productive (rather than less productive) sites. However, there was considerable variation in the pattern among response variables and measures of effect size. For example, on average, competitive effects on final biomass and survival decreased with standing crop, but competitive effects on growth rate did not. On the other hand, facilitative interactions were more common at low standing crop for final biomass and growth rate, but more common at high standing crop for survival. Results were more likely to be significant using the log response ratio (ln[removal/control]) as the effect size than using relative competition intensity ([removal - control]/removal), although the trends for these conceptually similar indices did not differ. When all studies were grouped in a single meta-regression of interaction intensity on standing crop to test quantitative similarity among studies, survival showed the clearest negative relationship. However, when the same regressions were done for each unique combination of taxon and site within each study to test for qualitative similarity among studies, the slopes averaged over studies tended to be negative for biomass and growth rate, but not different from zero for survival. These results are subject to a number of caveats because of the limitations of the available data; most notably, extrapolating effects of interactions on individual growth or survival to effects on population distribution and abundance or community structure is highly problematic. Nevertheless, the fact that none of the meta-analyses demonstrated a significant positive relationship between competition and standing crop suggests that existing theory on competition along productivity gradients merits reexamination.
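The two indices named above are straightforward to compute. The sketch below (Python, with hypothetical performance values; `removal` and `control` are illustrative names for mean performance with neighbors removed versus present) shows the log response ratio and relative competition intensity side by side:

```python
# Two effect-size indices for neighbor-removal experiments, computed on
# hypothetical per-case performance values (e.g., final biomass).
import numpy as np

removal = np.array([12.0, 8.5, 20.1])   # performance with neighbors removed
control = np.array([7.0, 9.2, 11.4])    # performance with neighbors present

lrr = np.log(removal / control)          # log response ratio: ln(removal/control)
rci = (removal - control) / removal      # relative competition intensity

# Positive values indicate competition (performance improved when neighbors
# were removed); negative values indicate facilitation.
for l, r in zip(lrr, rci):
    print(f"LRR = {l:+.3f}   RCI = {r:+.3f}")
```

One practical difference between the conceptually similar indices: the log response ratio treats competition and facilitation symmetrically on a log scale, whereas relative competition intensity is bounded above by 1 but unbounded below, which can matter when averaging across cases.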
We compare two approaches to designing and analyzing monitoring studies to assess chronic, local environmental impacts. Intervention Analysis (IA) compares Before and After time series at an Impact site; a special case is Before-After, Control-Impact (BACI), which uses comparison sites as covariates to reduce extraneous variance and serial correlation. IVRS (Impact vs. Reference Sites) compares Impact and Control sites with respect to Before-After change, treating the sites as experimental units. The IVRS estimate of an "effect" is the same as that of the simplest BACI (though not of others), but IVRS estimates error variance by variation among sites, while IA and BACI estimate it by variation over time.

These approaches differ in goals, design, and models of the role of chance in determining the data. In IA and BACI, the goal is to determine change at the specific Impact site, so no Controls are needed. IA does not have controls, and BACI's are not experimental controls but covariates, deliberately chosen to be correlated with the Impact site. The goal given for IVRS is to compare hypothetical Impact and Control "populations," so the Controls are essential and are randomly chosen, perhaps with restrictions to make them independent of each other and (presumably) of Impact. IA and BACI inferences are model based: uncertainty arises from sampling error and natural temporal processes causing variation in the variable of concern (e.g., a species' abundance); these processes are modeled as the results of repeatable chance setups. IVRS inferences are design based: uncertainty arises from variation among sites, as well as the other two sources, and is modeled by the assumed random selection of Impact and Control sites, like the drawing of equiprobable numbers from a hat.

We outline the formal analyses, showing that IVRS is simpler, and BACI more complex, than usually supposed. We then describe the principles and assumptions of IA and BACI, defining an "effect" as the difference between what happened after the impact and what would have happened without it, and stressing the need to justify chance models as reasonable representations of human uncertainty. We respond to comments on BACI, some of which arise from misunderstanding of these principles.

IVRS's design-based justification is almost always invalid in real assessments: the Impact site is not chosen randomly. We show that "as if random" selection by "Nature" is untenable and that an approximation to this, while a possibly useful guide, cannot be used for inference. We argue that, without literal random assignment of treatments to sites, IVRS can only be model based. Its design and analyses will then be different, using and allowing for correlation between sites. It is likely to have low power and requires strong assumptions that are difficult to check, so it should be used only when IA or BACI cannot be used, e.g., when there are no Before data.
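The shared point estimate mentioned in the first paragraph can be written out explicitly. Below is a brief sketch (Python, hypothetical numbers) of the difference-in-differences that both the simplest BACI and IVRS use, the Before-to-After change at the Impact site minus the corresponding change at the Control; the two approaches diverge only in how they attach uncertainty to it:

```python
# The simplest BACI/IVRS point estimate of an "effect", on hypothetical data:
# (change at Impact from Before to After) - (change at Control).
import numpy as np

impact_before  = np.array([21.0, 18.5, 24.2, 19.8])
impact_after   = np.array([14.1, 12.7, 16.0, 13.3])
control_before = np.array([20.3, 17.9, 23.5, 19.1])
control_after  = np.array([19.8, 18.4, 22.9, 18.6])

effect = (impact_after.mean() - impact_before.mean()) \
       - (control_after.mean() - control_before.mean())
print(f"estimated effect = {effect:.2f}")

# IA/BACI (model based) would judge this estimate's uncertainty by
# variation over time; IVRS (design based) by variation among sites.
```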
We address the task of determining the effects, on mean population density or other parameters, of an unreplicated perturbation, such as arises in environmental assessments and some ecosystem-level experiments. Our context is the Before-After-Control-Impact-Pairs design (BACIP): on several dates Before and After the perturbation, samples are collected simultaneously at both the Impact site and a nearby "Control." One approach is to test whether the mean of the Impact-Control difference has changed from Before to After the perturbation. If a conventional test is used, checks of its assumptions are an important and messy part of the analysis, since BACIP data do not necessarily satisfy them. It has been suggested that these checks are not needed for randomization tests, because they are insensitive to some of these assumptions and can be adjusted to allow for others. A major aim of this paper is to refute this suggestion: there is no panacea for the difficult and messy technical problems in the analysis of data from assessments or unreplicated experiments. We compare the randomization t test with the standard t test and the modified (Welch-Satterthwaite-Aspin) t test, which allows for unequal variances. We conclude that the randomization t test is less likely to yield valid inferences than the Welch t test, because it requires identical distributions for small sample sizes and either equal variances or equal sample sizes for larger ones. The formal requirement of Normality is not crucial to the Welch t test. Both parametric and randomization tests require that time and location effects be additive and that the Impact-Control differences on different dates be independent. These assumptions should be tested; if they are seriously wrong, alternative analyses are needed. This will often require a long time series of data. Finally, for assessing the importance of a perturbation, the P value of a hypothesis test is rarely as useful as an estimate of the size of the effect. Especially if effect size varies with time and conditions, flexible estimation methods giving approximate answers are preferable to formally exact P values.
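The contrast drawn here between the randomization and Welch t tests can be illustrated with a small simulation. The sketch below (Python; the data are hypothetical, with deliberately unequal variances between periods) runs both tests on Before and After Impact-Control differences; it illustrates the two procedures and is not a reproduction of the paper's analysis:

```python
# Welch t test vs. a randomization t test on hypothetical Impact - Control
# differences, Before vs. After, with unequal variances between periods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d_before = rng.normal(0.0, 1.0, size=10)    # differences Before
d_after  = rng.normal(-1.0, 2.0, size=10)   # differences After: shifted mean,
                                            # larger variance

# Welch t test: explicitly allows unequal variances.
t_obs, p_welch = stats.ttest_ind(d_after, d_before, equal_var=False)

# Randomization t test: permute the Before/After labels and recompute the
# statistic; valid only if the two distributions are identical under the null.
pooled = np.concatenate([d_after, d_before])
n_after, n_perm, count = len(d_after), 9999, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    t_perm, _ = stats.ttest_ind(perm[:n_after], perm[n_after:], equal_var=False)
    count += abs(t_perm) >= abs(t_obs)
p_rand = (count + 1) / (n_perm + 1)

print(f"Welch p = {p_welch:.4f}, randomization p = {p_rand:.4f}")
```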