We take our results to reflect an abstract long-term representation of vowels that does not include redundant specifications at very early stages of processing the speech signal. Moreover, the dipole locations indicate extraction of distinctive features and their mapping onto representationally faithful cortical locations (i.e., a feature map).
Are words stored as morphologically structured representations? If so, when during word recognition are morphological pieces accessed? Recent masked priming studies support models that assume early decomposition of (potentially) morphologically complex words. The electrophysiological evidence, however, is inconsistent. We combined masked morphological priming with magnetoencephalography (MEG), a technique particularly adept at indexing processes involved in lexical access. The latency of an MEG component peaking, on average, 220 msec after target onset in left occipito-temporal brain regions was sensitive to the morphological prime–target relationship under masked priming conditions in a visual lexical decision task. Shorter latencies for related than for unrelated conditions were observed both for semantically transparent (cleaner–CLEAN) and opaque (corner–CORN) prime–target pairs, but not for prime–target pairs with only an orthographic relationship (brothel–BROTH). These effects likely reflect a prelexical level of processing at which form-based representations of stems and affixes are activated, and they stand in contrast to models positing no morphological structure in lexical representations. Moreover, a post hoc comparison of the transitional probability from stem to affix suggests that this factor may modulate early morphological decomposition, particularly for opaque words. The timing of a robust MEG component sensitive to the morphological relatedness of prime–target pairs can be used to further understand the neural substrates and the time course of lexical processing.
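For concreteness, the sketch below illustrates how a stem-to-affix transitional probability is commonly computed, assuming the operationalization TP = freq(whole form) / freq(stem); the abstract does not specify its exact formula, and the frequency counts used here are hypothetical placeholders, not corpus values.

```python
# Minimal sketch of stem-to-affix transitional probability, assuming the
# common operationalization TP = freq(whole form) / freq(stem). The counts
# below are hypothetical, not taken from any corpus or from the paper.

def transitional_probability(freq_whole: int, freq_stem: int) -> float:
    """P(affix | stem): how strongly the stem predicts the full form."""
    if freq_stem <= 0:
        raise ValueError("stem frequency must be positive")
    return freq_whole / freq_stem

# Hypothetical counts for an opaque pair such as corner/CORN
print(transitional_probability(freq_whole=120, freq_stem=400))  # 0.3
```

On this operationalization, a higher TP means the stem more reliably predicts the affixed form, which is the kind of distributional factor the abstract suggests may modulate early decomposition of opaque words.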
A long-standing question in speech perception research is how listeners extract linguistic content from a highly variable acoustic input. In the domain of vowel perception, formant ratios, that is, the calculation of relative bark differences between vowel formants, have been a sporadically proposed solution. We propose a novel formant ratio algorithm in which the first (F1) and second (F2) formants are each compared against the third formant (F3). Results from two magnetoencephalographic (MEG) experiments are presented that suggest auditory cortex is sensitive to formant ratios. Our findings also demonstrate that the perceptual system shows heightened sensitivity to formant ratios for tokens located in more crowded regions of the vowel space. Additionally, we present statistical evidence that this algorithm eliminates speaker-dependent variation based on age and gender from vowel productions. We conclude that these results provide an impetus to reconsider formant ratios as a legitimate mechanistic component of a solution to the problem of speaker normalization.
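For concreteness, here is a minimal sketch of an F3-referenced, bark-difference formant ratio computation of the kind this abstract describes, assuming the Traunmüller (1990) Hz-to-bark conversion; the function names and sample formant values are illustrative, not taken from the paper.

```python
# Minimal sketch of an F3-referenced formant ratio computation, assuming
# the Traunmüller (1990) Hz-to-bark conversion. Function names and the
# example formant values are illustrative, not from the paper.

def hz_to_bark(f_hz: float) -> float:
    """Convert a frequency in Hz to the bark scale (Traunmüller, 1990)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def formant_ratios(f1: float, f2: float, f3: float) -> tuple[float, float]:
    """Return the F3-referenced bark differences (F3 - F1, F3 - F2)."""
    b1, b2, b3 = hz_to_bark(f1), hz_to_bark(f2), hz_to_bark(f3)
    return b3 - b1, b3 - b2

# Example: rough adult-male formant values for [i] (as in "beet")
print(formant_ratios(270.0, 2290.0, 3010.0))
```

Because F3 scales with vocal tract length in much the same way as F1 and F2, differencing both lower formants against it is what would cancel speaker-dependent scaling while preserving vowel quality information.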
While previous research has established that language-specific knowledge influences early auditory processing, it remains controversial which aspects of speech sound representations determine early speech perception. Here, we propose that early processing depends primarily on information propagated top-down from abstractly represented speech sound categories. In particular, we assume that mid vowels (as in ‘bet’) exert weaker top-down effects than high vowels (as in ‘bit’) because their tongue height is less specifically (by default) represented than that of either high or low vowels (as in ‘bat’). We tested this assumption in a magnetoencephalographic (MEG) study contrasting mid and high vowels, as well as low and high vowels, in a passive oddball paradigm. Overall, significant differences between deviants and standards indexed reliable mismatch negativity (MMN) responses between 200 and 300 ms post-stimulus onset. MMN amplitudes differed in the mid/high vowel contrasts and were significantly reduced when a mid-vowel standard was followed by a high-vowel deviant, extending previous findings. Furthermore, mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta band (18–26 Hz) compared to high-vowel standards. We take this as converging evidence that linguistic category structure exerts top-down influence on auditory processing. The findings are interpreted within the linguistic model of underspecification and the neuropsychological predictive coding framework.