Recent work on perceptual learning shows that listeners' phonemic representations dynamically adjust to reflect the speech they hear (Norris, McQueen, & Cutler, 2003). We investigate how the perceptual system makes such adjustments, and what (if anything) causes the representations to return to their pre-perceptual learning settings. Listeners are exposed to a speaker whose pronunciation of a particular sound (either /s/ or /S/) is ambiguous (e.g., halfway between /s/ and /S/). After exposure, participants are tested for perceptual learning on two continua that range from /s/ to /S/, one in the Same voice they heard during exposure, and one in a Different voice. To assess how representations revert to their prior settings, half of Experiment 1's participants were tested immediately after exposure; the other half performed a 25-min silent intervening task. The perceptual learning effect was actually larger after such a delay, indicating that simply allowing time to pass does not cause learning to fade. The remaining experiments investigate different ways that the system might unlearn a person's pronunciations: listeners hear the Same or a Different speaker for 25 min with either no relevant (i.e., 'good') /s/ or /S/ input (Experiment 2), one of the relevant inputs (Experiment 3), or both relevant inputs (Experiment 4). The results support a view of phonemic representations as dynamic and flexible, and suggest that they interact with both higher-level (e.g., lexical) and lower-level (e.g., acoustic) information in important ways.
Phonemic restoration is a powerful auditory illusion in which listeners "hear" parts of words that are not really there. In earlier studies of the illusion, segments of words (phonemes) were replaced by an extraneous sound; listeners were asked whether anything was missing and where the extraneous noise had occurred. Most listeners reported that the utterance was intact and mislocalized the noise, suggesting that they had restored the missing phoneme. In the present study, a second type of stimulus was also presented: items in which the extraneous sound was merely superimposed on the critical phoneme. On each trial, listeners were asked to report whether they thought a stimulus utterance was intact (noise superimposed) or not (noise replacing). Since this procedure yields both a miss rate P(intact | replaced) and a false alarm rate P(replaced | intact), signal detection parameters of discriminability and bias can be calculated. The discriminability parameter reflects how similar the two types of stimuli sound; perceptual restoration of replaced items should make them sound intact, producing low discriminability scores. The bias parameter measures the tendency of listeners to report utterances as intact; it reflects postperceptual decision processes. This improved methodology was used to test the hypothesis that restoration (and more generally, speech perception) depends upon the bottom-up confirmation of expectations generated at higher levels. Perceptual restoration varied greatly with the phone class of the replaced segment and its acoustic similarity to the replacement sound, supporting a bottom-up component to the illusion. Increasing listeners' expectations of a phoneme increased perceptual restoration: missing segments in words were better restored than corresponding pieces in phonologically legal pseudowords; priming the words produced even more restoration. In contrast, sentential context affected the postperceptual decision stage, biasing listeners to report utterances as intact. A limited interactive model of speech perception, with both bottom-up and top-down components, is used to explain the results.
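To make the signal-detection logic above concrete, here is a minimal sketch (Python with scipy; the rates in the example are hypothetical, not data from the study) of how discriminability (d') and bias (criterion c) can be computed from the miss rate P(intact | replaced) and the false-alarm rate P(replaced | intact), treating "replaced" as the signal to be detected.

```python
from scipy.stats import norm

def sdt_measures(miss_rate, false_alarm_rate):
    """Discriminability (d') and bias (criterion c) from the two rates
    described in the abstract, treating "replaced" as the signal:
      hit rate = P(replaced | replaced) = 1 - P(intact | replaced)
      d'       = z(hit rate) - z(false-alarm rate)
      c        = -(z(hit rate) + z(false-alarm rate)) / 2
    Low d': replaced items are hard to tell from intact ones (restoration).
    Positive c: an overall tendency to respond "intact" (decision bias).
    """
    hit_rate = 1.0 - miss_rate
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    return z_hit - z_fa, -(z_hit + z_fa) / 2.0

# Hypothetical rates: 60% misses, 30% false alarms
d_prime, criterion = sdt_measures(0.60, 0.30)
print(round(d_prime, 2), round(criterion, 2))  # ~0.27 (small d'), ~0.39 (bias toward "intact")
```

On this decomposition, strong restoration shows up as low d' (a perceptual effect), whereas sentential context, on the abstract's account, shifts c (a postperceptual decision effect) without changing d'.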
Lexical context strongly influences listeners' identification of ambiguous sounds. For example, a sound midway between /f/ and /s/ is reported as /f/ in "sheri_," but as /s/ in "Pari_." Norris, McQueen, and Cutler (2003) have demonstrated that after hearing such lexically determined phonemes, listeners expand their phonemic categories to include more ambiguous tokens than before. We tested whether listeners adjust their phonemic categories for a specific speaker. Do listeners learn a particular speaker's "accent"? Similarly, we examined whether perceptual learning is specific to the particular ambiguous phonemes that listeners hear, or whether the adjustments generalize to related sounds. Participants heard ambiguous /d/ or /t/ phonemes during a lexical decision task. They then categorized sounds on /d/-/t/ and /b/-/p/ continua, either in the same voice that they had heard for lexical decision, or in a different voice. Perceptual learning generalized across both speaker and test continua: Changes in perceptual representations are robust and broadly tuned.
People know thousands of words in their native language, and each of these words must be learned at some time in the person's lifetime. A large number of these words will be learned when the person is an adult, reflecting the fact that the mental lexicon is continuously changing. We explore how new words get added to the mental lexicon, and provide empirical support for a theoretical distinction between what we call lexical configuration and lexical engagement. Lexical configuration is the set of factual knowledge associated with a word (e.g., the word's sound, spelling, meaning, or syntactic role). Almost all previous research on word learning has focused on this aspect. However, it is also critical to understand the process by which a word becomes capable of lexical engagement--the ways in which a lexical entry dynamically interacts with other lexical entries, and with sublexical representations. For example, lexical entries compete with each other during word recognition (inhibition within the lexical level), and they also support the activation of their constituents (top-down lexical-phonemic facilitation, and lexically-based perceptual learning). We systematically vary the learning conditions for new words, and use separate measures of lexical configuration and engagement. Several surprising dissociations in behavior demonstrate the importance of the theoretical distinction between configuration and engagement.
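The notion of lexical engagement described above (within-level competition among words plus top-down lexical-phonemic facilitation) can be illustrated with a toy interactive-activation sketch. The two-word lexicon, weights, and update rule below are illustrative assumptions chosen for exposition, not the authors' model or measures; the sketch only shows how a word that wins the lexical competition feeds activation back to its constituent phonemes.

```python
import numpy as np

PHONEMES = ["k", "ae", "t", "p"]
WORDS = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"]}

# Binary constituency matrix: word i contains phoneme j.
constituents = np.array([[1.0 if p in ph else 0.0 for p in PHONEMES]
                         for ph in WORDS.values()])
bottom_up = 0.35 * constituents    # phoneme -> word support
top_down = 0.30 * constituents.T   # word -> phoneme feedback (facilitation)
inhibition = 0.6                   # word -> word competition strength

def step(phon_act, word_act, phon_input):
    """One update cycle: words gather bottom-up support, inhibit each
    other, and feed activation back down to their constituent phonemes."""
    word_net = bottom_up @ phon_act - inhibition * (word_act.sum() - word_act)
    word_act = np.clip(0.8 * word_act + 0.2 * word_net, 0.0, 1.0)
    phon_act = np.clip(0.8 * phon_act + 0.2 * (phon_input + top_down @ word_act),
                       0.0, 1.0)
    return phon_act, word_act

# Clear /k/ and /ae/, ambiguous final segment slightly favoring /t/ over /p/.
phon_input = np.array([1.0, 1.0, 0.6, 0.4])
phon_act, word_act = np.zeros(len(PHONEMES)), np.zeros(len(WORDS))
for _ in range(40):
    phon_act, word_act = step(phon_act, word_act, phon_input)

print(dict(zip(WORDS, word_act.round(2))))     # "cat" partially suppresses "cap"
print(dict(zip(PHONEMES, phon_act.round(2))))  # lexical feedback widens the /t/-/p/ gap
```

In this sketch, a newly learned word would count as "engaged" only once it participates in these dynamics, competing with existing entries and feeding activation back to its phonemes, which is exactly the capacity the abstract distinguishes from knowing a word's form and meaning (lexical configuration).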