In response to reports of inflated false positive rates (FPR) in FMRI group analysis tools, a series of replications, investigations, and software modifications were made to address this issue. While these investigations continue, significant progress has been made in adapting AFNI to fix such problems. Two separate lines of changes have been made. First, a long-tailed model for the spatial correlation of the FMRI noise, characterized by its autocorrelation function (ACF), was developed and implemented in the 3dClustSim tool for determining the cluster-size threshold to use for a given voxelwise threshold. Second, the 3dttest++ program was modified to perform randomization of the voxelwise t-tests and then to feed those randomized t-statistic maps directly into 3dClustSim for cluster-size threshold determination, without any spatial model for the ACF. These approaches were tested with the Beijing subset of the FCON-1000 data collection. The first approach shows markedly improved (reduced) FPRs, but in many cases these are still above the nominal 5%. The second approach shows FPRs clustered tightly about 5% across all per-voxel p-value thresholds ≤ 0.01. If t-tests from a univariate GLM are adequate for the group analysis in question, the second approach is what the AFNI group currently recommends for thresholding. If more complex per-voxel statistical analyses are required (where permutation/randomization is impracticable), then our current recommendation is to use the new ACF modeling approach coupled with a per-voxel p threshold of 0.001 or below. Simulations were also repeated with the now-infamous "buggy" version of 3dClustSim: the effect of the bug on FPRs was minimal (on the order of a few percent).
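The long-tailed spatial model referred to above is a Gaussian-plus-exponential mixture, ACF(r) = a·exp(−r²/(2b²)) + (1−a)·exp(−r/c), with the parameters (a, b, c) estimated from the residuals. The second (randomization) approach can be illustrated with a minimal sketch in pure Python. This is not the AFNI implementation: the function name, the use of scipy.ndimage for cluster labeling, and all parameter defaults are our own illustrative choices; 3dttest++/3dClustSim operate on real datasets with masking and far more randomizations.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_size_threshold(data, p_thresh=0.01, n_rand=200, fpr=0.05, seed=0):
    """Estimate a cluster-size threshold by sign-flip randomization.

    data : (n_subj, nx, ny, nz) array of per-subject effect maps.
    Returns the cluster size (in voxels) exceeded anywhere in a null
    map with probability about `fpr`.
    """
    rng = np.random.default_rng(seed)
    n_subj = data.shape[0]
    # two-sided voxelwise t threshold corresponding to p_thresh
    t_crit = stats.t.ppf(1.0 - p_thresh / 2.0, df=n_subj - 1)
    max_sizes = []
    for _ in range(n_rand):
        # randomly flip the sign of each subject's map (null hypothesis:
        # effects are symmetric about zero)
        signs = rng.choice([-1.0, 1.0], size=n_subj)[:, None, None, None]
        flipped = data * signs
        tmap = (flipped.mean(axis=0)
                / (flipped.std(axis=0, ddof=1) / np.sqrt(n_subj)))
        # label connected suprathreshold clusters and record the largest
        labels, n_clust = ndimage.label(np.abs(tmap) > t_crit)
        if n_clust == 0:
            max_sizes.append(0)
        else:
            sizes = np.bincount(labels.ravel())[1:]  # drop background
            max_sizes.append(sizes.max())
    # cluster size exceeded in only `fpr` of the null maps
    return int(np.quantile(max_sizes, 1.0 - fpr))
```

A cluster in the real group t-map then survives only if its size meets or exceeds the returned threshold.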
A recent study posted on bioRxiv by Bowring, Maumet, and Nichols aimed to compare results from FMRI data processed with three commonly used software packages (AFNI, FSL, and SPM). Their stated purpose was to use the "default" settings of each software's pipeline for task-based FMRI, and then to quantify the overlap in the final clustering results and to measure the similarity/dissimilarity of the packages' final outcomes. While in theory the setup sounds simple (implement each package's defaults and compare results), practical realities make this difficult. For example, the different packages recommend different spatial resolutions for the final data, but for the sake of comparison, the same value must be used across all of them. Moreover, we would say that AFNI does not have an explicit default pipeline: a wide diversity of datasets and study designs is acquired across the neuroimaging community, often requiring bespoke tailoring of the basic processing rather than a "one-size-fits-all" pipeline. However, we do have strong recommendations for certain steps, and we are also aware that the choice of a given step may place requirements on other processing steps. Given the very clear reporting of the AFNI pipeline used in the Bowring et al. paper, we take this opportunity to comment on some of these aspects of processing with AFNI, clarifying a few mistakes therein and also offering recommendations. We provide point-by-point considerations for using AFNI's pipeline-design tool at the individual level, afni_proc.py, along with supplementary programs; while discussed specifically in the context of the present usage, many of these choices may serve as useful starting points for broader processing. It is our hope that users will examine data quality at every step, and we demonstrate how AFNI facilitates this as well.
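To make the discussion of afni_proc.py concrete, the following is an illustrative invocation for a simple task-based, single-subject pipeline. All dataset names, stimulus timing files, and parameter values (subject ID, blur size, censor limit, basis function) are placeholders of our own choosing, not recommendations from the paper; consult "afni_proc.py -help" for the full option list.

```shell
# Illustrative afni_proc.py call: generates (and can execute) a full
# single-subject processing script from one specification.
afni_proc.py                                                    \
    -subj_id               subj01                               \
    -blocks                tshift align tlrc volreg blur mask scale regress \
    -copy_anat             subj01_anat+orig                     \
    -dsets                 subj01_epi_r1+orig subj01_epi_r2+orig \
    -tcat_remove_first_trs 2                                    \
    -tlrc_base             MNI152_2009_template.nii.gz          \
    -volreg_align_to       MIN_OUTLIER                          \
    -volreg_tlrc_warp                                           \
    -blur_size             4.0                                  \
    -regress_stim_times    stim_task.1D                         \
    -regress_basis         'BLOCK(20,1)'                        \
    -regress_censor_motion 0.3                                  \
    -regress_est_blur_errts
```

The generated proc script is itself a readable artifact, which is one way AFNI supports examining the data (and the processing choices) at every step.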
Network modeling in neuroimaging holds promise for probing the interrelationships among brain regions, with potential clinical applications. Two types of matrix-based analysis (MBA) are usually seen in neuroimaging connectomics. One uses a functional attribute matrix (FAM), e.g., of correlations, that measures the similarity of BOLD response patterns among a list of predefined regions of interest (ROIs). The other type of MBA involves a structural attribute matrix (SAM), e.g., describing properties of the white matter between any pair of gray-matter regions, such as fractional anisotropy, mean diffusivity, and axial and radial diffusivity. Different methods have been developed or adopted to summarize such matrices across subjects, including general linear models (GLMs) and various versions of graph-theoretic analysis. We argue that these types of modeling strategies tend to be "inefficient" in their statistical inferences and have many pitfalls, such as a strong dependence on arbitrary thresholding under conventional statistical frameworks. Here we offer an alternative approach that integrates the analyses of all the regions, region pairs (RPs), and subjects into one framework, called Bayesian multilevel (BML) modeling. In this approach, the intricate relationships across regions, as well as across RPs, are quantitatively characterized. This integrative approach avoids the multiple-testing issue that typically plagues conventional statistical analysis in neuroimaging, and it provides a principled way to quantify both the effect and its uncertainty at each region as well as for each RP. As a result, a unique feature of BML is that the effect at each region and its corresponding uncertainty can be estimated, revealing the relative strength or importance of each region; in addition, the effect at each RP is obtained along with its uncertainty as statistical evidence.
Most importantly, the BML approach can be scrutinized for consistency through validation and comparisons with alternative assumptions or models. We demonstrate the BML methodology with a real dataset with 16 ROIs from 41 subjects, and compare it to the conventional GLM approach in terms of model efficiency, performance and inferences. Furthermore, we emphasize the notion of full results reporting through "highlighting," instead of through the common practice of "hiding." The associated program will be available as part of the AFNI suite for general use.
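The decomposition underlying the BML approach can be written schematically as follows; this is our own simplified sketch of the model structure (notation and priors are illustrative, not the exact specification in the paper). For the effect $y_{ij,k}$ at region pair $(i,j)$ of subject $k$,

$$ y_{ij,k} = \alpha_0 + \xi_i + \xi_j + \eta_{ij} + \pi_k + \epsilon_{ij,k}, $$

with region effects $\xi_i \sim \mathcal{N}(0, \lambda^2)$, RP-specific interactions $\eta_{ij} \sim \mathcal{N}(0, \theta^2)$, subject effects $\pi_k \sim \mathcal{N}(0, \tau^2)$, and residual $\epsilon_{ij,k}$. Under such a partial-pooling structure, the posterior of $\alpha_0 + \xi_i$ quantifies each region's effect and uncertainty, while $\alpha_0 + \xi_i + \xi_j + \eta_{ij}$ does the same for each RP, all within a single model rather than via separate tests per matrix element.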
Neuroimaging relies on separate statistical inferences at tens of thousands of spatial locations. Such massively univariate analysis typically requires adjustment for multiple testing in an attempt to maintain the family-wise error rate at a nominal level of 5%. We discuss how this approach is associated with substantial information loss because of an implicit but questionable assumption about the effect distribution across spatial units. To improve inference efficiency, predictive accuracy, and generalizability, we propose a Bayesian multilevel modeling framework. In addition, we make four actionable suggestions to alleviate information waste and to improve reproducibility: (1) abandon strict dichotomization; (2) report full results; (3) quantify effects; and (4) model data hierarchy.