[Table SDC1: Details of records included in the meta-analysis, broken down by relevant component studies. Columns: Author(s) and year; Study; n; Group; Age (years); Onset of deafness; Age at CI activation (years); Duration of CI use (years); Prosody; Language; Stimuli; Cues (f0, intensity, duration); AFC; Measure; Comment. Table contents not recoverable here; first listed study: Agrawal et al. (2012).]
When we speak, we can vary how we use our voices. Our speech can be high or low (pitch), loud or soft (loudness), and fast or slow (duration). This variation in pitch, loudness, and duration is called speech prosody. It is a bit like making music. Varying our voices when we speak can express sarcasm or emotion and can even change the meaning of what we are saying. So, speech prosody is a crucial part of spoken language. But how do speakers produce prosody? How do listeners hear and understand these variations? Is it possible to hear and interpret prosody in other languages? And what about people whose hearing is not so good? Can they hear and understand prosodic patterns at all? Let’s find out!
This study assesses how a cochlear implant (CI) simulation influences the interpretation of prosodically marked linguistic focus in a non-native language. In an online experiment, two groups of normal-hearing native Dutch learners of English of different ages (12- to 14-year-old adolescents vs. adults aged 18+) and with different proficiency levels in English (A2 vs. B2/C1) were asked to listen to CI-simulated and non-CI-simulated English sentences differing in prosodically marked focus and to indicate which of four possible context questions the speaker was answering. Results show that, as expected, focus interpretation is significantly less accurate in the CI-simulated condition than in the non-CI-simulated condition, and that more proficient non-native listeners outperform less proficient non-native listeners. However, there was no interaction between the influence of the spectro-temporal degradation of the CI-simulated speech signal and that of the listeners' English proficiency level, suggesting that less proficient non-native listeners are not more strongly affected by the spectro-temporal degradation of an electric speech signal than more proficient non-native listeners.
Speakers can use prosodic cues to direct listeners to a specific part of an utterance. The prosodically emphasised part has linguistic focus, determined by the semantic and pragmatic context (e.g., Cole, 2015). For cochlear implant (CI) users, processing prosodically marked focus can be challenging given the degradation in fine spectro-temporal detail of the signal transmitted through the device (e.g., Başkent et al., 2016). An additional challenge can be expected for CI users listening to a non-native language. In this ongoing study, we investigate how native Dutch learners of English process prosodically marked focus in English sentences degraded by a CI simulation, compared to how they process it in non-CI-simulated stimuli. These results are compared to those of native English listeners. Listeners are presented with English sentences differing in prosodically marked sentential focus and are instructed to indicate which of four possible context questions prompted the response stimulus. We expect listeners to be less accurate and less efficient with CI-simulated stimuli than with non-CI-simulated stimuli, and non-native listeners to be less efficient than native listeners, underlining the challenges of processing prosodically marked sentential focus in a non-native language with CI hearing.