Highlights
• The manuscript presents a method to calculate sample sizes for fMRI experiments.
• The power analysis is based on the estimation of the mixture distribution of null and active peaks.
• The methodology is validated with simulated and real data.
Abstract
Mounting evidence over the last few years suggests that published neuroscience research suffers from low power, especially published fMRI experiments. Not only does low power decrease the chance of detecting a true effect, it also reduces the chance that a statistically significant result indicates a true effect (Ioannidis, 2005). Put another way, findings with the least power will be the least reproducible, and thus a (prospective) power analysis is a critical component of any paper. In this work we present a simple way to characterize the spatial signal in an fMRI study with just two parameters, and a direct way to estimate these two parameters from an existing study. Specifically, using just (1) the proportion of the brain activated and (2) the average effect size in activated brain regions, we can produce closed-form power calculations for a given sample size, brain volume and smoothness. This procedure allows one to minimize the cost of an fMRI experiment while preserving a predefined statistical power. The method is evaluated and illustrated using simulations and real neuroimaging data from the Human Connectome Project. The procedures presented in this paper are publicly available in an online web-based toolbox at www.neuropowertools.org.
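To make the abstract's peak-based power calculation concrete, the following is a minimal Python sketch, not the neuropowertools.org implementation. It assumes the exponential approximation for null peak heights above a screening threshold, a truncated-normal model for active peak heights with the group-level effect scaling as sqrt(n), and a Bonferroni-style familywise peak threshold; under these simplifications the proportion of activated brain does not enter (it would with an FDR-type threshold). All function names and default values are illustrative.

# Minimal sketch of a peak-based power calculation (illustrative assumptions):
#   - null peak heights above the screening threshold u are approximately
#     exponential: S0(t) = exp(-u * (t - u))
#   - active peak heights are normal with mean delta, unit variance, truncated at u
#   - the expected peak z-value scales with sqrt(n) across sample sizes
import numpy as np
from scipy import stats

def expected_null_peaks(volume_resels, u):
    """Approximate expected number of null peaks above u in a smooth
    Gaussian field (3D Euler-characteristic density; illustrative only)."""
    return (volume_resels * (4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2
            * (u ** 2 - 1) * np.exp(-u ** 2 / 2))

def peak_power(delta_pilot, n_pilot, n_new, u=2.3, alpha=0.05, volume_resels=500):
    """Average peak-level power at a Bonferroni-corrected peak threshold."""
    # Expected peak height for the new sample size (sqrt(n) scaling assumption)
    delta_new = delta_pilot * np.sqrt(n_new / n_pilot)
    # Bonferroni-style threshold: solve m * S0(t) = alpha for t
    m = expected_null_peaks(volume_resels, u)
    t_alpha = u - np.log(alpha / m) / u
    # Power = P(T > t_alpha | active peak), truncated-normal alternative
    num = 1 - stats.norm.cdf(t_alpha - delta_new)
    den = 1 - stats.norm.cdf(u - delta_new)
    return num / den

if __name__ == "__main__":
    # Example: pilot study of 15 subjects, average active peak height 4.0
    for n in (20, 30, 40, 50):
        print(n, round(peak_power(delta_pilot=4.0, n_pilot=15, n_new=n), 3))

In this sketch the required sample size is simply the smallest n for which peak_power exceeds the target (e.g. 0.8); swapping in an FDR-based threshold would additionally require the estimated proportion of active peaks.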