Tracking statistical regularities of the environment is important for shaping human behavior and perception. Evidence suggests that the brain learns environmental dependencies using Bayesian principles. However, much remains unknown about the employed algorithms, for somesthesis in particular. Here, we describe the cortical dynamics of the somatosensory learning system to investigate both the form of the generative model as well as its neural surprise signatures. Specifically, we recorded EEG data from 40 participants subjected to a somatosensory roving-stimulus paradigm and performed single-trial modeling across peri-stimulus time in both sensor and source space. Our Bayesian model selection procedure indicates that evoked potentials are best described by a non-hierarchical learning model that tracks transitions between observations using leaky integration. From around 70ms post-stimulus onset, secondary somatosensory cortices are found to represent confidence-corrected surprise as a measure of model inadequacy. Indications of Bayesian surprise encoding, reflecting model updating, are found in primary somatosensory cortex from around 140ms. This dissociation is compatible with the idea that early surprise signals may control subsequent model update rates. In sum, our findings support the hypothesis that early somatosensory processing reflects Bayesian perceptual learning and contribute to an understanding of its precise mechanisms.
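The model family named in this abstract (a non-hierarchical transition learner with leaky integration, plus predictive, Bayesian, and confidence-corrected surprise readouts) can be sketched in code. The sketch below is an illustrative reconstruction, not the authors' implementation: the leak rate, the binary stimulus coding, the class and method names, and the grid-based KL computation are all assumptions. "Bayesian surprise" is computed as the KL divergence from prior to posterior belief, and "confidence-corrected surprise" as the KL from the current prior to the posterior a hypothetical naive observer with a flat prior would hold after the same observation.

```python
import math

def beta_logpdf(x, a, b):
    """Log density of a Beta(a, b) distribution at x in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x)

def kl_beta(p, q, n=2000):
    """Numerical KL divergence KL(Beta(p) || Beta(q)) via an interior grid."""
    total = 0.0
    for i in range(1, n):
        x = i / n
        lp = beta_logpdf(x, *p)
        lq = beta_logpdf(x, *q)
        total += math.exp(lp) * (lp - lq) / n
    return total

class LeakyTransitionLearner:
    """Non-hierarchical learner tracking P(next | previous) for a binary
    stimulus stream, using exponentially leaky pseudo-counts."""

    def __init__(self, leak=0.9):  # leak rate is an arbitrary choice here
        self.leak = leak
        # counts[prev] = [pseudo-count of next=0, pseudo-count of next=1],
        # initialised to a flat Beta(1, 1) belief for each previous stimulus
        self.counts = {0: [1.0, 1.0], 1: [1.0, 1.0]}

    def observe(self, prev, nxt):
        c = self.counts[prev]
        prior = (c[1], c[0])                 # Beta(alpha=ones, beta=zeros)
        p_next = c[nxt] / (c[0] + c[1])      # predictive probability of nxt
        posterior = (prior[0] + (1 if nxt == 1 else 0),
                     prior[1] + (1 if nxt == 0 else 0))
        # naive observer: flat Beta(1, 1) prior updated by this one observation
        naive = (2.0, 1.0) if nxt == 1 else (1.0, 2.0)
        surprises = {
            "predictive": -math.log(p_next),           # Shannon surprise
            "bayesian": kl_beta(posterior, prior),     # belief update size
            "confidence_corrected": kl_beta(prior, naive),
        }
        # leaky integration: decay old evidence, then add the new observation
        c[0] *= self.leak
        c[1] *= self.leak
        c[nxt] += 1.0
        return surprises
```

Running the learner on a run of repeated stimuli followed by a deviant reproduces the qualitative pattern the abstract relies on: all three surprise measures are larger for the deviant than for a well-predicted standard.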
The human brain is constantly subjected to a multimodal stream of probabilistic sensory inputs. Electroencephalography (EEG) signatures, such as the mismatch negativity (MMN) and the P3, can give valuable insight into neuronal probabilistic inference. Although reported for different modalities, mismatch responses have largely been studied in isolation, with a strong focus on the auditory MMN. To investigate the extent to which early and late mismatch responses across modalities represent comparable signatures of uni‐ and cross‐modal probabilistic inference in the hierarchically structured cortex, we recorded EEG from 32 participants undergoing a novel tri‐modal roving stimulus paradigm. The employed sequences consisted of high and low intensity stimuli in the auditory, somatosensory and visual modalities and were governed by unimodal transition probabilities and cross‐modal conditional dependencies. We found modality specific signatures of MMN (~100–200 ms) in all three modalities, which were source localized to the respective sensory cortices and shared right lateralized prefrontal sources. Additionally, we identified a cross‐modal signature of mismatch processing in the P3a time range (~300–350 ms), for which a common network with frontal dominance was found. Across modalities, the mismatch responses showed highly comparable parametric effects of stimulus train length, which were driven by standard and deviant response modulations in opposite directions. Strikingly, P3a responses across modalities were increased for mispredicted stimuli with low cross‐modal conditional probability, suggesting sensitivity to multimodal (global) predictive sequence properties. Finally, model comparisons indicated that the observed single trial dynamics were best captured by Bayesian learning models tracking unimodal stimulus transitions as well as cross‐modal conditional dependencies.
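The tri-modal roving paradigm described above can be illustrated with a toy sequence generator. Everything in this sketch is a hypothetical simplification: the abstract does not specify the actual generative rules, so the repeat probability, the particular auditory-to-visual coupling used as the "cross-modal conditional dependency", and the function name are all assumptions made for illustration.

```python
import random

def generate_trimodal_sequence(n_trials, p_repeat=0.8, p_follow=0.7, seed=0):
    """Toy generator for high (1) / low (0) intensity streams in three
    modalities. Each stream tends to repeat its last intensity (unimodal
    transition probability), and the visual stream is additionally biased
    to follow the current auditory intensity (a cross-modal dependency)."""
    rng = random.Random(seed)
    aud = [rng.randint(0, 1)]
    som = [rng.randint(0, 1)]
    vis = [rng.randint(0, 1)]
    for _ in range(1, n_trials):
        # unimodal transition probabilities: streams tend to repeat
        aud.append(aud[-1] if rng.random() < p_repeat else 1 - aud[-1])
        som.append(som[-1] if rng.random() < p_repeat else 1 - som[-1])
        # hypothetical cross-modal rule: visual follows the current
        # auditory intensity with probability p_follow, else evolves alone
        if rng.random() < p_follow:
            vis.append(aud[-1])
        else:
            vis.append(vis[-1] if rng.random() < p_repeat else 1 - vis[-1])
    return aud, som, vis
```

A learner fitted to such sequences would need both unimodal transition estimates and the cross-modal conditional to predict well, which is the structure the abstract's winning models share.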