2021
DOI: 10.1177/1754073920954288

Comment: The Next Frontier: Prosody Research Gets Interpersonal

Abstract: Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language, when vocal…

Cited by 23 publications (9 citation statements)
References 51 publications
“…Recognition of emotional utterances increased as stimulus intensity increased [8, 62], suggesting that discrete (posed) vocal emotions are more clearly differentiated when they are expressed at high intensity levels [40]. As expected, vocal expressions of negative emotions achieved systematically higher recognition levels for both cultural groups despite there being only one positive emotion under study [4, 22, 26]. Interestingly, while happiness was generally associated with low recognition across languages and stimulus types, this difference was less evident for the Chinese participants when judging stimuli in their native language, Mandarin.…”
Section: Discussion (supporting)
confidence: 54%
“…While these and other studies [18–21] exemplify that cultural knowledge modulates emotion perception for faces, research on the voice is far less advanced. Human speech contains both the linguistic message and acoustically rich vocal signals that people utilize to exchange social information and to share emotions [22]. As people produce an utterance, acoustic features (e.g., changes in pitch, loudness, and duration of speech elements) dynamically reveal the speaker’s emotional state over time [6, 23], promoting pancultural recognition of most basic emotions in the voice [1–3, 24].…”
Section: Introduction (mentioning)
confidence: 99%
“…Prosody also appears to be a contextualizing marker of verbal interactions that directly leads listeners to the speaker’s emotional message (House, 2007). The critical role prosody plays in interpersonal and social situations (Pell and Kotz, 2021) may be generated by its perceptual salience, or may lead to heightened sensitivity to prosodic cues in noise.…”
Section: Discussion (mentioning)
confidence: 99%
“…Although pitch and loudness were both essential to the encoding of socio-pragmatic meanings (Jiang and Pell, 2017; Caballero et al., 2018; Pell and Kotz, 2021), they seemed to act in concert with lexical tone to form complex interactive patterns when encoding speaker confidence. A previous study (Zhang et al., 2021) on the weighting of different acoustic parameters in encoding prominence across the four Mandarin tones showed that, on syllables with flat tones, the mean, maximal, and minimal pitch contributed more to marking prominent syllables than did mean intensity, while on syllables with contour tones, mean intensity and intensity variation weighed more heavily than pitch-related features.…”
Section: Discussion (mentioning)
confidence: 99%