Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on the probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviants (H-H-H/L-L-L), reversal deviants (L-H-L/H-L-H), and first-tone deviants (L-L-H/H-H-L). If deviance detection were based on pattern probability, reversal and first-tone deviants should be detected with similar latency, because both differ from the standard at the first pattern position. If deviance detection were based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection rely on transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between the stimulus-probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing.
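To make the contrast concrete, here is a minimal Python sketch (not the study's analysis code; the triplet proportions are illustrative assumptions): it estimates first-order transitional probabilities from a simulated stream of frequent H-L-H standards with rare deviants, then scores each deviant type by the least probable within-triplet transition it contains.

```python
import random
from collections import Counter

random.seed(0)
# Hypothetical stream: frequent H-L-H standards plus rare deviants.
triplets = ["HLH"] * 880 + ["HHH"] * 40 + ["LHL"] * 40 + ["LLH"] * 40
random.shuffle(triplets)

# Count tone-to-tone transitions within triplets only (boundaries
# between triplets are ignored, as if separated by a pause).
pair_counts = Counter((a, b) for t in triplets for a, b in zip(t, t[1:]))
pred_counts = Counter(a for t in triplets for a in t[:-1])

def trans_prob(a, b):
    """Estimated probability that tone b follows tone a."""
    return pair_counts[(a, b)] / pred_counts[a]

deviants = {
    "proximity (H-H-H)":  "HHH",
    "reversal (L-H-L)":   "LHL",
    "first-tone (L-L-H)": "LLH",
}
for name, t in deviants.items():
    lowest = min(trans_prob(a, b) for a, b in zip(t, t[1:]))
    print(f"{name}: least probable transition = {lowest:.2f}")
```

Under these toy proportions, the proximity and first-tone deviants each contain a rare transition (H-to-H or L-to-L), whereas the reversal deviant contains only the frequent H-to-L and L-to-H transitions, so a purely transition-based detector would flag it last, consistent with the reported pattern.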
Associating letters with speech sounds is essential for acquiring reading skill. In the current study, we aimed to determine the effects of different types of visual material and of temporal synchrony on the integration of letters and speech sounds. To this end, we recorded the mismatch negativity (MMN), an index of automatic change detection in the brain, from literate adults. Subjects were presented with auditory consonant-vowel syllable stimuli together with visual stimuli, which were either written syllables or scrambled pictures of the written syllables. The visual stimuli were presented synchronously with the auditory stimuli in half of the blocks and 200 ms before the auditory stimuli in the other half. The auditory stimuli comprised consonant, vowel, or vowel-length changes, as well as changes in syllable frequency or intensity, presented using the multi-feature paradigm. Changes in the auditory stimuli elicited MMNs in all conditions. MMN amplitudes for the consonant and frequency changes were generally larger for sounds presented with written syllables than with scrambled syllables. The time delay diminished the MMN amplitude for all deviants. The results suggest that speech sound processing is modulated when the sounds are presented with letters rather than with non-linguistic visual stimuli and, further, that the integration of letters and speech sounds seems to depend on precise temporal alignment. Moreover, the results indicate that with our paradigm, a variety of parameters relevant and irrelevant to reading can be tested within a single experiment.
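As a concrete illustration of the presentation scheme, here is a minimal Python sketch (an assumption-laden outline, not the study's stimulus code): it builds a multi-feature block in which standards alternate with the five deviant types named above and pairs each auditory onset with a visual onset that is either synchronous or leads by 200 ms; the stimulus-onset asynchrony is an assumed placeholder value.

```python
import itertools
import random

random.seed(1)
# Deviant types from the abstract; timing values below are assumptions.
DEVIANT_TYPES = ["consonant", "vowel", "vowel_length", "frequency", "intensity"]

def multi_feature_block(n_deviants=100, soa_ms=500, visual_lead_ms=0):
    """Alternate standard and deviant syllables (multi-feature scheme);
    deviant types recur in shuffled cycles. Returns trials as
    (visual_onset_ms, auditory_onset_ms, stimulus_label) tuples."""
    deviant_stream = itertools.chain.from_iterable(
        random.sample(DEVIANT_TYPES, len(DEVIANT_TYPES))
        for _ in itertools.count()
    )
    trials, t = [], 0
    for i in range(2 * n_deviants):
        label = "standard" if i % 2 == 0 else next(deviant_stream)
        trials.append((t - visual_lead_ms, t, label))
        t += soa_ms
    return trials

sync_block = multi_feature_block(visual_lead_ms=0)     # synchronous blocks
delay_block = multi_feature_block(visual_lead_ms=200)  # visual leads by 200 ms
print(delay_block[:4])
```

Swapping visual_lead_ms between 0 and 200 across blocks would reproduce the synchronous versus delayed conditions described above.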