Multi-echo functional MRI (fMRI), particularly the multi-echo independent component analysis (ME-ICA) algorithm, has previously proven useful for increasing sensitivity and reducing false positives in fMRI-based resting-state connectivity studies. Less is known about its efficacy for task-based fMRI, especially at the single-subject level. This work, which focuses exclusively on individual-subject results, compares ME-ICA to single-echo fMRI and to a voxel-wise T2*-weighted combination of multi-echo data for task-based fMRI under the following scenarios: cardiac-gated block designs, constant repetition time (TR) block designs, and constant-TR rapid event-related designs. Performance is evaluated primarily in terms of sensitivity (i.e., activation extent, activation magnitude, percentage of detected trials, and effect-size estimates) using five different tasks expected to evoke neuronal activity in a distributed set of regions. The ME-ICA algorithm significantly outperformed all other evaluated processing alternatives in all scenarios. The largest improvements were observed for the cardiac-gated dataset, where ME-ICA reliably detected and removed non-neural T1 signal fluctuations caused by non-constant repetition times. Although ME-ICA also outperformed the other options in terms of percent detection of individual trials for rapid event-related experiments, only 46% of all events were detected after ME-ICA, suggesting that additional improvements in sensitivity are required to reliably detect individual short event occurrences. We conclude the manuscript with a detailed evaluation of ME-ICA outcomes and a discussion of how the algorithm could be further improved. Overall, our results suggest that ME-ICA constitutes a versatile, powerful approach for advanced denoising of task-based fMRI, not just resting-state data.
Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway.
Introduction: To describe the protocol and findings of the instrumental validation of three imaging-based biomarker kits selected by the MarkVCID consortium: free water (FW) and peak width of skeletonized mean diffusivity (PSMD), both derived from diffusion tensor imaging (DTI), and white matter hyperintensity (WMH) volume derived from fluid-attenuated inversion recovery and T1-weighted imaging.
Methods: The instrumental validation of the imaging-based biomarker kits included inter-rater reliability among participating sites, test–retest repeatability, and inter-scanner reproducibility across three types of magnetic resonance imaging (MRI) scanners, assessed using intra-class correlation coefficients (ICC).
Results: The three biomarkers demonstrated excellent inter-rater reliability (ICC > 0.94, P-values < .001), very high agreement between test and retest sessions (ICC > 0.98, P-values < .001), and were extremely consistent across the three scanners (ICC > 0.98, P-values < .001).
Discussion: The three biomarker kits demonstrated very high inter-rater reliability, test–retest repeatability, and inter-scanner reproducibility, offering robust biomarkers suitable for future multi-site observational studies and clinical trials in the context of vascular cognitive impairment and dementia (VCID).
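The reliability metrics reported above (inter-rater, test–retest, inter-scanner) all rest on the intra-class correlation coefficient. As a minimal illustration of how such an ICC is computed, the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement) from its standard ANOVA mean-square decomposition. This is not the MarkVCID consortium's validation pipeline; the function name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data : array of shape (n_subjects, k_raters), one measurement per cell.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater (or per-session/scanner) means

    # Partition the total sum of squares into subject, rater, and error terms.
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    # Mean squares from the two-way ANOVA table.
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1) formula.
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Synthetic example: 50 "subjects" measured by 3 "raters" with small noise,
# mimicking the high-agreement regime (ICC > 0.94) reported in the abstract.
rng = np.random.default_rng(0)
true_values = rng.normal(0.0, 10.0, size=50)[:, None]
ratings = true_values + rng.normal(0.0, 0.5, size=(50, 3))
print(round(icc_2_1(ratings), 3))
```

With between-subject variance far exceeding measurement noise, the ICC approaches 1; in practice one would typically use a vetted implementation (e.g., `pingouin.intraclass_corr`) rather than hand-rolled code.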
Animate and inanimate objects differ in their intermediate visual features. For instance, animate objects tend to be more curvilinear compared to inanimate objects (e.g., Levin, Takarae, Miner, & Keil, 2001). Recently, it has been demonstrated that these differences in the intermediate visual features of animate and inanimate objects are sufficient for categorization: human participants viewing synthesized images of animate and inanimate objects that differ largely in the amount of these visual features classify objects as animate/inanimate significantly above chance (Long, Störmer, & Alvarez, 2017). A remaining question, however, is whether the observed categorization is a consequence of top-down cognitive strategies (e.g., rectangular shapes are less likely to be animals) or a consequence of bottom-up processing of their intermediate visual features, per se, in the absence of top-down cognitive strategies. To address this issue, we repeated the classification experiment of Long et al. (2017) but, unlike Long et al. (2017), matched the synthesized images, on average, in the amount of image-based and perceived curvilinear and rectilinear information. Additionally, in our synthesized images, global shape information was not preserved, and the images appeared as texture patterns. These changes prevented participants from using top-down cognitive strategies to perform the task. During the experiment, participants were presented with these synthesized, texture-like animate and inanimate images and, on each trial, were required to classify them as either animate or inanimate with no feedback given. Participants were told that these synthesized images depicted abstract art patterns. We found that participants still classified the synthesized stimuli significantly above chance even though they were unaware of their classification performance. For both object categories, participants depended more on the curvilinear and less on the rectilinear, image-based information present in the stimuli for classification. Surprisingly, the stimuli most consistently classified as animate were the most dangerous animals in our sample of images. We conclude that bottom-up processing of intermediate features present in the visual input is sufficient for animate/inanimate object categorization and that these features may convey information associated with the affective content of the visual stimuli.