2013
DOI: 10.3389/fpsyg.2013.00353

Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations

Abstract: Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and the USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged i…
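The within-valence forced-choice procedure described in the abstract can be illustrated with a minimal scoring sketch. The emotion labels, trial data, and three-alternative set below are invented for illustration; the actual study used nine positive and nine negative emotions.

```python
# Minimal sketch of scoring a within-valence forced-choice experiment.
# Labels and responses are hypothetical; the paper's design used nine
# emotions per valence, so its chance level was 1/9, not 1/3.

POSITIVE = ["joy", "relief", "pride"]  # illustrative subset

# (intended emotion, listener's choice) pairs from a hypothetical session
trials = [
    ("joy", "joy"),
    ("relief", "joy"),
    ("pride", "pride"),
    ("joy", "joy"),
]

hits = sum(intended == chosen for intended, chosen in trials)
accuracy = hits / len(trials)

# Chance level in an n-alternative forced-choice task is 1/n.
chance = 1 / len(POSITIVE)

print(f"accuracy = {accuracy:.2f}, chance = {chance:.2f}")
# → accuracy = 0.75, chance = 0.33
```

Comparing observed accuracy against the 1/n chance level is what lets such studies claim above-chance cross-cultural recognition.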

Cited by 108 publications (118 citation statements)
References 45 publications
“…When directly compared (which is rare), listeners tend to be superior in labeling vocalizations over prosody, especially for anger, joy, sadness, disgust and fear (Hawk et al., 2009). Nonetheless, cross-cultural studies that have presented emotional prosody (Pell, Paulmann, Dara, Alasseri, & Kotz, 2009; Scherer, Banse, & Wallbott, 2001; Thompson & Balkwill, 2006) or vocalizations (Laukka et al., 2013; Sauter & Eimer, 2009; Sauter & Scott, 2007) argue that each type of vocal expression possesses a 'universal' set of acoustic features that uniquely refers to different basic emotions, which predict how listeners assign meaning to these signals. Recent data on speech-embedded emotions are also beginning to reveal the time course of emotional prosody recognition based on behavioral judgments of stimuli gated into different stimulus durations; this work shows that acoustic patterns in speech differentiate rapidly to reveal basic emotional meanings to listeners after hearing approximately 400-800 ms of acoustic information (Cornew, Carver, & Love, 2010; Jiang, Paulmann, Robin, & Pell, in press; Pell & Kotz, 2011; Rigoulot, Wassiliwizky, & Pell, 2013).…”
Section: Non-linguistic Vocalizations Versus Speech-embedded Emotions
confidence: 99%
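The gating paradigm mentioned above truncates each stimulus to successively longer initial segments before listeners judge it. A minimal sketch of that manipulation, assuming an illustrative 16 kHz sample rate and a silent stand-in waveform:

```python
# Sketch of the "gating" manipulation: a stimulus is cut to successively
# longer initial segments (gates), and listeners judge each gate.
# The sample rate, gate durations, and waveform here are assumptions.

SAMPLE_RATE = 16_000  # Hz, assumed for illustration

def gate(samples, duration_ms):
    """Return the first duration_ms milliseconds of a waveform."""
    n = int(SAMPLE_RATE * duration_ms / 1000)
    return samples[:n]

stimulus = [0.0] * SAMPLE_RATE  # one second of silence as a stand-in
gates = {ms: gate(stimulus, ms) for ms in (200, 400, 600, 800)}
print({ms: len(seg) for ms, seg in gates.items()})
# → {200: 3200, 400: 6400, 600: 9600, 800: 12800}
```

Plotting recognition accuracy against gate duration is what yields the roughly 400-800 ms identification points reported in the gating studies cited above.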
“…The parallelism of human emotion expression in speech and music has been demonstrated by a comprehensive review of empirical studies on patterns of acoustic parameters in these two forms of human affect communication (Juslin and Laukka, 2003). The assumption of powerful "affect primitives" in speech and language is also supported by research on the recognition of emotion in speech (Bryant and Barrett, 2008; Laukka et al., 2013b; Pell et al., 2009; Sauter et al., 2010; Scherer et al., 2001) and music (Laukka et al., 2013a). This research has generally shown both a fairly high degree of universality in the underlying expression and recognition mechanisms and sizeable differences between cultures, especially for self-reflective, social, and moral emotions.…”
Section: Introduction
confidence: 97%
“…In consequence, we expect that emotion expression is similar in the speaking and singing voice, because of the evolutionary origin of the expression mechanisms and the need for authenticity (Maynard Smith and Harper, 2003; Mortillaro et al., 2013). Obviously, there may well be important differences across languages and cultures, due in large part to language characteristics such as phonemic structure or intonation rules.…”
Section: Introduction
confidence: 99%
“…Listeners can reliably recognize a broad range of vocally expressed emotions, even when the spoken words are unrelated to the emotion (Fairbanks and Pronovost, 1938; Belin et al., 2008; Simon-Thomas et al., 2009) or when recordings are filtered to remove segmental content (Lieberman and Michaels, 1962). Unlike the words that make up the segmental aspect of speech, affective vocalizations can be recognized across languages (Laukka et al., 2013) and between cultures that have had only minimal historical contact (Sauter et al., 2010), although with some cultural variation (Scherer and Wallbott, 1994), and even across species (Faragó et al., 2014). Indeed, infants who are hearing-impaired produce affective vocalizations that are acoustically similar to those of normal-hearing infants (Scheiner et al., 2004, 2006).…”
Section: Introduction
confidence: 99%
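The filtering manipulation cited above removes intelligible segmental (verbal) content while preserving slower prosodic variation. As a rough stand-in for a proper low-pass filter, a moving average shows the principle: rapid (high-frequency) alternation is attenuated while slow trends pass through. The window size and toy signal are assumptions for illustration only.

```python
# Crude illustration of low-pass filtering via a moving average:
# high-frequency (segmental-like) detail is smoothed away while
# slow (prosody-like) variation would survive. Not the actual
# filter used in the cited studies.

def moving_average(samples, window):
    """Average each sample with up to window-1 preceding samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        seg = samples[lo : i + 1]
        out.append(sum(seg) / len(seg))
    return out

signal = [1.0, -1.0] * 4          # rapid alternation = high frequency
print(moving_average(signal, 2))  # alternation averages toward zero
# → [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Real studies use frequency-domain low-pass filters with an explicit cutoff, but the effect on intelligibility versus prosody is the same in kind.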