The studyforrest (http://studyforrest.org) dataset is likely the largest neuroimaging dataset on natural language and story processing publicly available today. In this article, along with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. Fifteen participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas, such as the fusiform face area. The combination of this update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye-tracking data from natural stimulation with a motion picture, forms an extremely versatile and comprehensive resource for brain imaging research, with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe the employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants' visual processing stream.
DataLad is a Python-based tool for the joint management of code, data, and their relationship, built on top of a versatile system for data logistics (git-annex) and the most popular distributed version control system (Git). It adapts principles of open-source software development and distribution to address the technical challenges of data management, data sharing, and digital provenance collection across the life cycle of digital objects. DataLad aims to make data management as easy as managing code. It streamlines procedures to consume, publish, and update data of any size or type, and to link datasets as precisely versioned, lightweight dependencies. DataLad helps to make science more reproducible and FAIR (Wilkinson et al., 2016). It can capture complete and actionable process provenance of data transformations to enable automatic re-computation. The DataLad project (datalad.org) delivers a completely open, pioneering platform for flexible, decentralized research data management (RDM) (Hanke, Pestilli, et al., 2021). It features a Python and a command-line interface, offers an extensible architecture, and does not depend on any centralized services, yet facilitates interoperability with a plurality of existing tools and services. To maximize its utility and target audience, DataLad is available for all major operating systems and can be integrated into established workflows and environments with minimal friction.
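The consumption and provenance-capture workflow described above can be sketched with DataLad's command-line interface. This is a minimal illustration, not a prescribed pipeline: it assumes DataLad is installed and network access is available, and the dataset URL, file paths, and the `smooth.sh` script are illustrative placeholders rather than references to actual published content.

```shell
# Obtain a lightweight clone of a dataset: only versioned metadata is
# transferred at this point, not the (potentially large) file content.
# The repository URL below is a hypothetical example.
datalad clone https://example.org/some-dataset.git mydataset
cd mydataset

# Retrieve the content of one file on demand (lazy data consumption).
datalad get data/recording-01.nii.gz

# Run a transformation while capturing actionable process provenance:
# the command, its inputs, and its outputs are recorded in the dataset
# history. "smooth.sh" is a hypothetical user-supplied script.
datalad run -m "Smooth recording 01" \
  --input "data/recording-01.nii.gz" \
  --output "derivatives/recording-01_smoothed.nii.gz" \
  "bash smooth.sh {inputs} {outputs}"

# Automatically re-execute the last recorded command from its
# provenance record, e.g. after the input data changed.
datalad rerun
```

Because every step is recorded in Git history, a collaborator who clones the dataset can inspect exactly how each derivative was produced and reproduce it with `datalad rerun`, which is the "automatic re-computation" the text refers to.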
Here we present an extension to the studyforrest dataset, a versatile resource for studying the behavior of the human brain in situations of real-life complexity (http://studyforrest.org). This release adds more high-resolution, ultra high-field (7 Tesla) functional magnetic resonance imaging (fMRI) data from the same individuals. The twenty participants were repeatedly stimulated with a total of 25 music clips, with and without speech content, from five different genres, using a slow event-related paradigm. The data release includes raw fMRI data, as well as precomputed structural alignments for within-subject and group analysis. In addition to the fMRI data, simultaneously recorded cardiac and respiratory traces, as well as the complete implementation of the stimulation paradigm, including stimuli, are provided. An initial quality-control analysis reveals distinguishable patterns of response to individual genres throughout a large expanse of areas known to be involved in auditory and speech processing. These data can be used, for example, to generate encoding models for music perception that can be validated against the previously released fMRI data from stimulation with the "Forrest Gump" audio-movie and its rich musical content. In order to facilitate replicative and derived works, only free and open-source software was utilized.
Here we present an annotation of the locations and temporal progression depicted in the movie "Forrest Gump", as an addition to a large public functional brain imaging dataset (http://studyforrest.org). The annotation provides information about the exact timing of each of the 870 shots, and the depicted location after every cut at a high, medium, and low level of abstraction. Additionally, four classes are used to distinguish the differences in depicted time between shots. Each shot is also annotated regarding the type of location (interior/exterior) and the time of day. This annotation enables further studies of visual perception, memory of locations, and the perception of time under conditions of real-life complexity using the studyforrest dataset.