Subject motion can introduce noise into neuroimaging data and result in biased estimates of brain structure. In-scanner motion can compromise data quality in a number of ways and varies widely across developmental and clinical populations. However, quantification of structural image quality is often limited to proxy or indirect measures gathered from functional scans, an approach that may miss true differences related to these potential artifacts. In this study, we take advantage of a novel informatics tool, the CAT12 toolbox, to measure image quality directly from T1-weighted images and to understand whether these measures of image quality: (1) relate to rigorous quality-control checks completed visually by human raters; (2) are associated with sociodemographic variables of interest; and (3) influence regional estimates of cortical surface area, cortical thickness, and subcortical volumes derived from the commonly used FreeSurfer tool suite. We leverage public-access data that include a community-based sample of children and adolescents spanning a large age range (N = 388; ages 5–21). Interestingly, even after visually inspecting our data, we find that image quality significantly impacts derived cortical surface area, cortical thickness, and subcortical volumes in multiple regions across the brain (~23.4% of all areas investigated). We believe these results are important for research groups completing structural MRI studies using FreeSurfer or other morphometric tools. As such, future studies should consider using measures of image quality to minimize the influence of this potential confound in group comparisons or studies of individual differences.
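To make the analysis described above concrete, the sketch below regresses a FreeSurfer-derived regional measure on an image-quality rating while adjusting for age and sex. This is a minimal sketch, not the authors' code; the file name and column names (e.g., "cat12_iqr", "lh_thickness_sup_frontal") are hypothetical placeholders.

```python
# Minimal sketch: does image quality predict a FreeSurfer-derived regional measure
# beyond age and sex? Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subjects.csv")  # one row per participant

# "cat12_iqr" stands in for a CAT12-style image-quality rating exported per scan.
model = smf.ols("lh_thickness_sup_frontal ~ cat12_iqr + age + C(sex)", data=df).fit()
print(model.summary())
```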
On-going, large-scale neuroimaging initiatives have produced many MRI datasets with hundreds, even thousands, of individual participants and scans. These databases can aid in uncovering neurobiological causes and correlates of poor mental health, disease pathology, and many other important factors. While volumetric quantification of brain structures can be completed by expert hand-tracing, automated segmentation is becoming the only truly tractable approach for particularly large datasets. Here, we assessed the spatial and numerical reliability of newly deployed automated segmentation of hippocampal subfields and amygdala nuclei in FreeSurfer. In a sample of participants with repeated structural imaging scans (N = 118), we found numerical reliability (as assessed by intraclass correlations, ICCs) to be generally high, with 92% of the subregions having ICCs above 0.90 and the remainder still above 0.75. Spatial reliability was lower: only 11% of regions had Dice coefficients above 0.90, but 70% had Dice coefficients above 0.75. Of particular concern, three regions (the hippocampal fissure, the anterior amygdaloid area, and the paralaminar nucleus) had only moderate spatial reliability (0.50–0.75). We also examined correlations between spatial reliability and person-level factors (e.g., age, inter-scan interval, and difference in image quality). Of these factors, inter-scan interval and image quality were related to variations in spatial reliability. Examined collectively, our work suggests strong numerical and spatial reliability for the majority of hippocampal and amygdala subdivisions; however, caution should be exercised for a few regions with more variable reliability.
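The numerical reliability reported above rests on intraclass correlations across repeated scans. A minimal sketch of that computation is shown below, assuming a hypothetical long-format table with one row per (participant, scan) and a "volume" column for a given subregion; it is an illustration, not the study's pipeline.

```python
# Minimal sketch: ICC for one subregion's volume across two scans per participant.
# File and column names are hypothetical.
import pandas as pd
import pingouin as pg

long_df = pd.read_csv("subfield_volumes_long.csv")  # columns: subject, scan, volume

icc = pg.intraclass_corr(
    data=long_df,
    targets="subject",   # participants are the "targets" being rated
    raters="scan",       # scan 1 vs. scan 2 act as "raters"
    ratings="volume",    # e.g., CA1 volume in mm^3
)
print(icc[["Type", "ICC", "CI95%"]])
```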
In-scanner head movements can introduce artifacts into MRI images and increase errors in brain-behavior studies. The magnitude of in-scanner head movement varies widely across developmental and clinical samples, making it increasingly difficult to separate "true signal" from motion-related noise. Yet the quantification of structural image quality is typically limited to subjective visual assessments and/or proxy measures of motion. It is, however, unknown how direct measures of image quality relate to developmental and behavioral variables, as well as to measures of brain morphometry. To begin to answer this question, we leverage a multi-site dataset of structural MRI images, which includes children and adolescents with varying degrees of psychopathology. We first find that a composite measure of structural image quality relates to important developmental and behavioral variables (e.g., IQ; clinical diagnoses). Additionally, we demonstrate that even among T1-weighted images that pass visual inspection, variations in image quality impact volumetric derivations of regional gray matter. Image quality was associated with widespread variations in gray matter, including in portions of the frontal, parietal, and temporal lobes, as well as the cerebellum. Further, our image quality composite partially mediated the relationship between age and total gray matter volume, explaining 23% of this relationship. Collectively, these effects underscore the need for volumetric studies to model or mitigate the effect of image quality when investigating brain-behavior relations.
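The partial mediation reported above (image quality explaining part of the age-to-total-gray-matter association) can be illustrated with a simple product-of-coefficients calculation. This is a hedged sketch with hypothetical column names ("quality", "total_gm"), not the authors' analysis, which may have used a different mediation framework and bootstrapped confidence intervals.

```python
# Minimal sketch: product-of-coefficients mediation of age -> total gray matter
# through an image-quality composite. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subjects.csv")  # columns: age, quality, total_gm

total = smf.ols("total_gm ~ age", data=df).fit()             # c: total effect of age
med = smf.ols("quality ~ age", data=df).fit()                 # a: age -> image quality
direct = smf.ols("total_gm ~ age + quality", data=df).fit()   # b and c' (direct effect)

indirect = med.params["age"] * direct.params["quality"]       # a * b
proportion_mediated = indirect / total.params["age"]
print(f"indirect effect = {indirect:.3f}, proportion mediated = {proportion_mediated:.2%}")
```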
Abuse, neglect, exposure to violence, and other forms of early life adversity (ELA) are incredibly common and significantly impact physical and mental development. While important progress has been made in understanding the impacts of ELA on behavior and the brain, the preponderance of past work has centered on threat processing and vigilance while ignoring other potentially critical neurobehavioral processes, such as reward responsiveness and learning. To advance our understanding of potential mechanisms linking ELA and poor mental health, we focus on structural connectivity of the corticostriatal circuit, specifically accumbofrontal white matter tracts. Here, in a sample of 77 youth (mean age = 181 months), we leveraged rigorous measures of ELA, robust diffusion neuroimaging methods, and computational modeling of reward learning. Linking these different forms of data, we hypothesized that higher ELA would be related to lower quantitative anisotropy in accumbofrontal white matter. Furthermore, we predicted that lower accumbofrontal quantitative anisotropy would be related to differences in reward learning. Our primary predictions were confirmed, but similar patterns were not seen in control white matter tracts outside of the corticostriatal circuit. Examined collectively, our work is one of the first projects to connect ELA to neural and behavioral alterations in reward learning, a critical potential mechanism linking adversity to later developmental challenges. This could provide windows of opportunity to address the effects of ELA through interventions and preventative programming.
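The abstract refers to computational modeling of reward learning without specifying the model. One common choice for such tasks is a delta-rule (Rescorla-Wagner) learner, sketched below purely as an illustration; the study's actual model and parameters may differ.

```python
# Minimal sketch of a delta-rule (Rescorla-Wagner) reward-learning model.
# The learning rate and reward schedule here are toy values, not the study's.
import numpy as np

def simulate_rw(rewards, alpha=0.3):
    """Update an expected value V after each outcome using the prediction error."""
    V = 0.0
    values, errors = [], []
    for r in rewards:
        delta = r - V          # reward prediction error
        V += alpha * delta     # learning-rate-weighted update
        values.append(V)
        errors.append(delta)
    return np.array(values), np.array(errors)

rewards = np.random.binomial(1, 0.7, size=100)  # toy 70%-reinforced schedule
values, errors = simulate_rw(rewards, alpha=0.3)
print(values[-5:])  # expected value should hover near 0.7 late in learning
```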
On-going, large-scale neuroimaging initiatives can aid in uncovering neurobiological causes and correlates of poor mental health, disease pathology, and many other important conditions. As projects grow in scale, with hundreds, even thousands, of individual participants and scans collected, quantification of brain structures by automated algorithms is becoming the only truly tractable approach. Here, we assessed the spatial and numerical reliability of newly deployed automated segmentation of hippocampal subfields and amygdala nuclei in FreeSurfer 7. In a sample of participants with repeated structural imaging scans (N = 928), we found numerical reliability (as assessed by intraclass correlations, ICCs) was reasonable. Approximately 95% of hippocampal subfields had “excellent” numerical reliability (ICCs ≥ 0.90), while only 67% of amygdala subnuclei met this same threshold. In terms of spatial reliability, 58% of hippocampal subfields and 44% of amygdala subnuclei had Dice coefficients ≥ 0.70. Notably, multiple regions had poor numerical and/or spatial reliability. We also examined correlations between spatial reliability and person-level factors (e.g., participant age; T1 image quality). Both sex and scan quality were related to variations in spatial reliability metrics. Examined collectively, our work suggests caution should be exercised for a few hippocampal subfields and amygdala nuclei with more variable reliability.
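Spatial reliability in the work above is summarized with Dice coefficients between the segmentations of a participant's two scans. A minimal sketch of that overlap computation is given below, assuming the two label images have already been brought into a common space; the file names and label value are hypothetical.

```python
# Minimal sketch: Dice overlap for one labeled subregion across two scans
# of the same participant. File names and the label value are hypothetical.
import nibabel as nib
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

seg1 = nib.load("time1_subfields.nii.gz").get_fdata()
seg2 = nib.load("time2_subfields.nii.gz").get_fdata()

label = 206  # hypothetical integer label for one subfield
print(dice(seg1 == label, seg2 == label))
```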