Cortical depth-dependent functional magnetic resonance imaging (fMRI), also known as layer-fMRI, has the potential to capture directional neural information flow of brain computations within and across large-scale cortical brain networks. For example, layer-fMRI can differentiate feedforward from feedback cortical input in hierarchically organized brain networks. Recent advancements in 3D-EPI sampling approaches and MR contrast generation strategies have enabled proof-of-principle studies showing that layer-fMRI can provide sufficient data quality for capturing laminar changes in functional connectivity. However, these studies have not shown how reliable the signal is and how repeatable the respective results are. It is especially unclear whether whole-brain layer-fMRI functional connectivity protocols are widely applicable across common neuroscience-driven analysis approaches. Moreover, there are no established fMRI preprocessing methods that are optimized for whole-brain layer-fMRI datasets. In this work, we aimed to serve the field of layer-fMRI and to build tools for future routine whole-brain layer-fMRI in application-based neuroscience research. We have developed publicly available sequences, acquisition protocols, and processing pipelines for whole-brain layer-fMRI. These protocols were validated across 60 hours of scanning in nine participants. Specifically, we identified and exploited methodological advancements for maximizing tSNR efficiency and test-retest reliability. We are sharing an extensive multi-modal whole-brain layer-fMRI dataset (20 scan hours of movie-watching in a single participant) for the purpose of benchmarking future method developments: the Kenshu dataset. With this dataset, we also exemplify the usefulness of whole-brain layer-fMRI for commonly applied analysis approaches in modern cognitive neuroscience fMRI studies, including connectivity analyses, representational similarity matrix estimations, general linear model analyses, principal component analysis clustering, and more. We believe that this work paves the way for future routine measurements of directional functional connectivity across the entire brain.
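As a rough, illustrative sketch of two quantities named in the abstract above, the snippet below computes per-voxel tSNR and a representational similarity matrix with plain NumPy. This is not the published pipeline; the array shapes, function names, and simulated data are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' pipeline): per-voxel temporal SNR and a
# representational similarity matrix (RSM), two quantities mentioned in the
# abstract. Shapes and variable names are illustrative assumptions.
import numpy as np

def temporal_snr(timeseries):
    """tSNR per voxel: temporal mean divided by temporal standard deviation.

    timeseries: array of shape (n_voxels, n_timepoints).
    """
    mean = timeseries.mean(axis=1)
    std = timeseries.std(axis=1)
    return np.where(std > 0, mean / std, 0.0)

def representational_similarity_matrix(patterns):
    """RSM as pairwise Pearson correlation between condition patterns.

    patterns: array of shape (n_conditions, n_voxels).
    """
    return np.corrcoef(patterns)

# Toy usage with simulated data (no real fMRI data involved).
rng = np.random.default_rng(0)
ts = rng.normal(loc=100.0, scale=2.0, size=(5000, 300))   # voxels x timepoints
patterns = rng.normal(size=(12, 5000))                    # conditions x voxels
print(temporal_snr(ts).mean())
print(representational_similarity_matrix(patterns).shape)  # (12, 12)
```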
From moment to moment, the human gaze is directed at various locations to acquire the information necessary to recognize the external environment at the fine resolution of foveal vision. Previous studies have shown that the human gaze is attracted to particular locations in the visual field at particular times, but it remains unclear what visual features produce this spatiotemporal bias. In this study, we used a deep convolutional neural network model to extract hierarchical visual features from natural scene images and evaluated how strongly the human gaze is attracted to these features in space and time. Eye movement measurements and visual feature analysis with the deep convolutional neural network model showed that the gaze was more strongly attracted to spatial locations containing higher-order visual features than to locations containing lower-order visual features or to locations predicted by conventional saliency. Analysis of the time course of gaze attraction revealed that the bias toward higher-order visual features was prominent within a short period after the onset of observation of the natural scene images. These results demonstrate that higher-order visual features are a strong gaze attractor in both space and time, suggesting that the human visual system uses foveal vision resources to extract information from higher-order visual features with high spatiotemporal priority.
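As a hedged illustration of the general approach described above (not the authors' exact network, layer choice, or analysis), the sketch below uses a pretrained VGG16 from torchvision as a stand-in deep convolutional network, pools a late convolutional layer into a 2D "higher-order feature" map, and reads that map out at fixated image coordinates. The file name and fixation coordinates are hypothetical.

```python
# Illustrative sketch: channel-pooled activation map from a late CNN layer,
# compared at fixated image locations. VGG16 and the conv5_3 layer are
# assumptions standing in for whatever model the study actually used.
import torch
import torch.nn.functional as F
from torchvision import models
from torchvision.models import VGG16_Weights
from PIL import Image

weights = VGG16_Weights.DEFAULT
model = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()

activations = {}
def save_activation(_module, _inp, out):
    activations["conv5_3"] = out.detach()

# features[28] is the last convolutional layer (conv5_3) in torchvision's VGG16.
model.features[28].register_forward_hook(save_activation)

def feature_map(image: Image.Image) -> torch.Tensor:
    """Channel-averaged conv5_3 activation, upsampled to the image size."""
    x = preprocess(image).unsqueeze(0)                         # 1 x 3 x 224 x 224
    with torch.no_grad():
        model(x)
    fmap = activations["conv5_3"].mean(dim=1, keepdim=True)    # 1 x 1 x 14 x 14
    fmap = F.interpolate(fmap, size=image.size[::-1],          # (H, W) of the image
                         mode="bilinear", align_corners=False)
    return fmap[0, 0]                                          # H x W map

def mean_value_at_fixations(fmap: torch.Tensor, fixations):
    """Mean map value at (x, y) fixation coordinates given in image pixels."""
    vals = [fmap[int(y), int(x)] for x, y in fixations]
    return torch.stack(vals).mean().item()

# Hypothetical usage: compare fixated locations against control locations.
# img = Image.open("scene.jpg").convert("RGB")
# fmap = feature_map(img)
# print(mean_value_at_fixations(fmap, fixations=[(320, 240), (512, 100)]))
```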