If listening in adverse conditions is hard, then listening in a foreign language is doubly so: non-native listeners have to cope with both imperfect signals and imperfect knowledge. Comparison of native and non-native listener performance in speech-in-noise tasks helps to clarify the role of prior linguistic experience in speech perception, and, more directly, contributes to an understanding of the problems faced by language learners in everyday listening situations. This article reviews experimental studies on non-native listening in adverse conditions, organised around three principal contributory factors: the task facing listeners, the effect of adverse conditions on speech, and the differences among listener populations. Based on a comprehensive tabulation of key studies, we identify robust findings, research trends and gaps in current knowledge.
Introduction

For many of us, the first non-native language experience outside the classroom is a shock. Not only are the answers to the carefully-practised stock phrases not those that appeared in the textbook, but the raw acoustic material reaching our ears lacks the clarity of the speakers in the quiet language laboratory. Thus unprepared, we enter the real world of the non-native listener, characterised by the dual challenges of imperfect signal and imperfect knowledge. And the problem persists even as we gain experience, exposure and confidence in the non-native language. Use of the telephone seems harder than it should be. Conversations in restaurants and bars are difficult to follow and join. The television never seems quite loud enough. We continue to prefer hearing non-native speakers of English at international conferences rather than highly-fluent natives. We finally take solace in the fact that even "true" bilingual listeners never quite reach the ability of monolinguals in the presence of noise (Mayo et al., 1997; Rogers et al., 2006).

Knowing about the extent of problems faced by non-native listeners in adverse conditions is important in developing theories of general speech perception. Comparing adult, normal-hearing populations who differ only in their native-language experience has the potential to provide insights into the role of linguistic factors in speech decoding. Since all listeners routinely handle acoustically-complex scenarios containing competing sound sources, reverberant energy and other forms of distortion, the use of native and non-native populations allows us to explore the extent to which linguistic knowledge is used in tasks such as sound source separation and identification. To give an example, consider the processes used by listeners to handle variability in formant frequencies due to factors such as differences in vocal tract sizes.
Can vocal tract length normalisation be performed purely on the basis of the speech signal itself, or does it depend on the identification of units such as vowels in the speech stream, thereby engaging higher level representations which differentiate between native and non-native listeners?