Humans have a striking ability to infer meaning from even the sparsest and most abstract forms of narratives. At the same time, flexibility in the form of a narrative is matched by inherent ambiguity in its interpretation. How does the brain represent subtle, idiosyncratic differences in the interpretation of abstract and ambiguous narratives? In this fMRI study, subjects were scanned either watching a novel 7-min animation depicting a complex narrative through the movement of geometric shapes, or listening to a narration of the animation's social story. Using an intersubject representational similarity analysis that compared interpretation similarity and neural similarity across subjects, we found that the more similar two people's interpretations of the abstract shapes animation were, the more similar were their neural responses in regions of the default mode network (DMN) and fronto-parietal network. Moreover, these shared responses were modality invariant: the shapes movie and the verbal interpretation of the movie elicited shared responses in linguistic areas and a subset of the DMN when subjects shared interpretations. Together, these results suggest a network of high-level regions that are not only sensitive to subtle individual differences in narrative interpretation during naturalistic conditions, but also resilient to large differences in the modality of the narrative.
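The intersubject representational similarity analysis described above amounts to asking whether pairs of subjects with more similar interpretations also show more similar neural responses. Below is a minimal sketch of that logic, assuming hypothetical inputs: an `interp_sim` subjects-by-subjects interpretation-similarity matrix (e.g., derived from ratings of written recalls) and `roi_ts`, each subject's ROI time course. The placeholder data and details here are illustrative and do not reproduce the published pipeline.

```python
# Minimal sketch of an intersubject representational similarity analysis (IS-RSA).
# `interp_sim` and `roi_ts` are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subj, n_tr = 20, 300
roi_ts = rng.standard_normal((n_subj, n_tr))                 # placeholder BOLD time courses
interp_sim = np.corrcoef(rng.standard_normal((n_subj, 8)))   # placeholder interpretation similarity

# Neural similarity: pairwise correlation of ROI time courses across subjects.
neural_sim = np.corrcoef(roi_ts)

# Compare the two subject-by-subject matrices over their upper triangles.
iu = np.triu_indices(n_subj, k=1)
rho, _ = spearmanr(interp_sim[iu], neural_sim[iu])
print(f"IS-RSA Spearman rho = {rho:.3f}")

# Significance would be assessed with a subject-label permutation (Mantel-style)
# test, since the matrix entries are not independent observations.
```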
Small changes in word choice can lead to dramatically different interpretations of narratives. How does the brain accumulate and integrate such local changes to construct unique neural representations for different stories? In this study, we created two distinct narratives by changing only a few words in each sentence (e.g., "he" to "she" or "sobbing" to "laughing") while preserving the grammatical structure across stories. We then measured changes in neural responses between the two stories. We found that differences in neural responses between the two stories gradually increased along the hierarchy of processing timescales. For areas with short integration windows, such as early auditory cortex, the differences in neural responses between the two stories were relatively small. In contrast, in areas with the longest integration windows at the top of the hierarchy, such as the precuneus, temporoparietal junction, and medial frontal cortices, there were large differences in neural responses between stories. Furthermore, this gradual increase in neural differences between the stories was highly correlated with an area's ability to integrate information over time. Amplification of neural differences did not occur when changes in words did not alter the interpretation of the story (e.g., "sobbing" to "crying"). Our results demonstrate how subtle differences in words are gradually accumulated and amplified along the cortical hierarchy as the brain constructs a narrative over time.

timescale | amplification | fMRI | narrative | hierarchy

Stories unfold over many minutes and are organized into temporally nested structures: paragraphs are made of sentences, which are made of words, which are made of phonemes. Understanding a story therefore requires processing the story at multiple timescales, starting with the integration of phonemes into words within a relatively short temporal window, to the integration of words into sentences across a longer temporal window of a few seconds, and up to the integration of sentences and paragraphs into a coherent narrative over many minutes. It was recently suggested that the temporally nested structure of language is processed hierarchically along the cortical surface (1-4). A consequence of the hierarchical structure of language is that small changes in word choice can give rise to large differences in sentence and overall narrative interpretation. Here, we propose that local, momentary changes in linguistic input in the context of a narrative (e.g., "he" to "she" or "sobbing" to "laughing") are accumulated and amplified during the processing of linguistic content across the cortex. Specifically, we hypothesize that as areas in the brain increase in their ability to integrate information over time (e.g., processing word-level content vs. sentence- or narrative-level content), the neural response to small word changes will become increasingly divergent.

Previously, we defined a temporal receptive window (TRW) as the length of time during which prior information from an ongoing stimulus...
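The central comparison above can be summarized as: compute, for each region, how much the responses to the two nearly identical stories diverge, and relate that divergence to the region's processing timescale. The sketch below illustrates that relationship, assuming hypothetical inputs: `story_a` and `story_b` (region-by-timepoint group-average responses to the two stories) and `trw_index` (one timescale estimate per region, e.g., from a scrambling paradigm). The divergence measure used here (1 minus the correlation between the two stories' time courses) is an assumption for illustration, not necessarily the published metric.

```python
# Illustrative sketch: relate between-story response divergence to processing
# timescale per region. All inputs below are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_roi, n_tr = 50, 400
story_a = rng.standard_normal((n_roi, n_tr))   # placeholder responses to story A
story_b = rng.standard_normal((n_roi, n_tr))   # placeholder responses to story B
trw_index = rng.random(n_roi)                  # placeholder timescale estimate per ROI

# Divergence per ROI: 1 - correlation between the two stories' time courses.
divergence = np.array([
    1.0 - np.corrcoef(story_a[r], story_b[r])[0, 1] for r in range(n_roi)
])

# The hypothesis predicts a positive rank correlation: longer timescales,
# larger between-story differences.
rho, p = spearmanr(trw_index, divergence)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```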
A strong relationship between cortical folding and the location of primary sensory areas in the human brain is well established. However, it is unknown if coupling between functional responses and gross anatomy is found at higher stages of sensory processing. We examined the relationship between cortical folding and the location of the retinotopic maps hV4 and VO1, which are intermediate stages in the human ventral visual processing stream. Our data show a consistent arrangement of the eccentricity maps within hV4 and VO1 with respect to anatomy, with the consequence that the hV4/VO1 boundary is found consistently in the posterior transverse collateral sulcus (ptCoS) despite individual variability in map size and cortical folding. Understanding this relationship allowed us to predict the location of visual areas hV4 and VO1 in a separate set of individuals, using only their anatomies, with >85% accuracy. These findings have important implications for understanding the relation between cortical folding and functional maps as well as for defining visual areas from anatomical landmarks alone.
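One way to make the ">85% accuracy" claim concrete is to compare an anatomically predicted ROI against the functionally defined map in held-out individuals. The sketch below is a toy illustration of such a comparison on binary voxel masks; the overlap metric (Dice coefficient) and the placeholder masks are assumptions for illustration, not the study's actual criterion or data.

```python
# Toy illustration: score an anatomy-based prediction of hV4 against a
# functionally defined mask using Dice overlap (metric assumed, not sourced).
import numpy as np

rng = np.random.default_rng(2)
shape = (64, 64, 40)

# Placeholder masks: `functional_hv4` from retinotopic mapping,
# `predicted_hv4` from an anatomical (ptCoS-based) template.
functional_hv4 = rng.random(shape) > 0.98
predicted_hv4 = functional_hv4.copy()
predicted_hv4[rng.random(shape) > 0.995] ^= True   # perturb to mimic prediction error

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print(f"Dice(predicted, functional) = {dice(predicted_hv4, functional_hv4):.2f}")
```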