Speech Prosody 2022
DOI: 10.21437/speechprosody.2022-15
Interpretation of prosodically marked focus in cochlear implant-simulated speech by non-native listeners

Abstract: This study assesses how a cochlear implant (CI) simulation influences the interpretation of prosodically marked linguistic focus in a non-native language. In an online experiment, two groups of normal-hearing native Dutch learners of English of different ages (12-14 year-old adolescents vs. 18+ year-old adults) and with different proficiency levels in English (A2 vs. B2/C1) were asked to listen to CI-simulated and non-CI-simulated English sentences differing in prosodically marked focus and indicate which of fo…

Cited by 3 publications (6 citation statements)
References 27 publications
“…Therefore, the loss of voice pitch information results in significant deficits in CI users' perception of vocal emotions (Luo et al., 2007; Hopyan-Misakyan et al., 2009; Chatterjee et al., 2015; Tinnemore et al., 2018; Barrett et al., 2020). Consistent with findings in CI listeners, identification of emotional prosody is also impaired in normally hearing listeners subjected to CI-simulated, or vocoded, speech (Shannon et al., 1995; Chatterjee et al., 2015; Tinnemore et al., 2018; Ritter & Vongpaisal, 2018; Everhardt et al., 2020). Despite CI users' limitations in pitch perception and in identification of prosodic cues, the average CI user shows excellent sentence recognition with high-context materials in quiet environments (James et al., 2019).…”
Section: Introduction
confidence: 60%
“…Previous investigations of cue-weighting in vocal emotions either attenuated variations in individual acoustic features to observe how reducing the information provided by cues impairs accuracy (Gilbers et al., 2015; Luo et al., 2007) or attenuated variations in pairs of acoustic features to quantify listeners' abilities to make use of individual acoustic features providing potentially informative cues. Specifically, they provide preliminary evidence that when F0 information is missing, listeners may be able to use intensity and/or speech-rate cues to glean emotional meaning (e.g., Everhardt et al., 2020; Hegarty & Faulkner, 2013; Metcalfe, 2017), and that intensity cues may be more reliable than speech-rate cues in doing so (e.g., Marx et al., 2015; Peng et al., 2012). Based on these reports, we hypothesise that the accuracy with which vocal emotions are identified is reduced as potential cues, ordered from least to most impactful, are rendered uninformative, that is, intensity and speech-rate cues combined, then F0 cues alone, followed by F0 and intensity cues combined and, finally, F0 and speech-rate cues combined.…”
Section: Attenuating Variations In F0 Reduces Accuracy With Which Voc...
confidence: 98%
“…Consistent with the degradation of the speech signal being less severe in hearing with HAs compared to with CIs, HA users tend to show a milder reduction in accuracy when recognising vocal emotion (Most & Aviner, 2009). While increasing evidence demonstrates that HA and CI listeners have difficulty recognising vocal emotions due to degraded F0 information (e.g., Everhardt et al., 2020; Goy et al., 2018; Most & Aviner, 2009; Waaramaa, Kukkonen, Mykkänen, & Geneid, 2018; Waaramaa, Kukkonen, Stoltz, & Geneid, 2018), little is known about the neural mechanisms underpinning poor and successful recognition of vocal emotions in hearing-impaired individuals.…”
Section: Implications For Processing Of Vocal Emotions With Hearing D...
confidence: 98%