2019
DOI: 10.1037/amp0000399
Mapping 24 emotions conveyed by brief human vocalization.

Abstract: Emotional vocalizations are central to human social life. Recent studies have documented that people recognize at least 13 emotions in brief vocalizations. This capacity emerges early in development, is preserved in some form across cultures, and informs how people respond emotionally to music. What is poorly understood is how emotion recognition from vocalization is structured within what we call a semantic space, the study of which addresses questions critical to the field: How many distinct kinds of emotion…

Cited by 145 publications (116 citation statements)
References 74 publications
“…Both Dutch and Chinese nonverbal vocalisations were highly effective means of communicating 11 different positive emotions. These results are in line with previous research showing that amusement, interest, lust, relief and surprise are well-recognised from nonverbal vocalisations (e.g., Cowen, 2019; Cordaro et al., 2016; Laukka et al., 2013). In addition to these emotions, the current investigation showed that nonverbal vocalisations can reliably communicate admiration, determination, excitement, inspiration, schadenfreude, and sensory pleasure.…”
Section: Discussion (supporting)
confidence: 93%
“…Humans communicate emotions with the voice through prosody and vocal bursts (Cowen et al., 2019b). Research has long claimed that certain acoustic features, such as pitch, loudness, tempo or quality and their related parameters (e.g., fundamental frequency, jitter, shimmer, harmonics-to-noise ratio) drive the recognition of emotions from prosody and vocal bursts (e.g., Sauter et al., 2010; Scherer and Baenziger, 2004; Banse and Scherer, 1996).…”
Section: Study 1: Performance Accuracy By Classification Algorithms (mentioning)
confidence: 99%
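The excerpt above names concrete acoustic parameters (fundamental frequency, jitter, shimmer, harmonics-to-noise ratio, loudness) as drivers of vocal emotion recognition. Below is a minimal sketch of how two of these (pitch and loudness) can be measured from a short vocalization, assuming the librosa library is available; the function name and feature set are illustrative only, not the pipeline of the cited studies, and jitter, shimmer, and HNR would require a Praat-style tool that is not shown here.

```python
# Minimal illustrative sketch (assumes librosa): extract frame-wise pitch and
# loudness from a short vocalization and summarize them as utterance-level features.
import numpy as np
import librosa

def basic_prosodic_features(path):
    y, sr = librosa.load(path, sr=None)      # load waveform at its native sample rate
    f0, voiced, _ = librosa.pyin(            # frame-wise fundamental frequency (pitch)
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    rms = librosa.feature.rms(y=y)[0]        # frame-wise RMS energy (a loudness proxy)
    return {
        "f0_mean": float(np.nanmean(f0)),    # mean pitch over voiced frames
        "f0_std": float(np.nanstd(f0)),      # pitch variability
        "rms_mean": float(rms.mean()),       # average loudness
        "voiced_fraction": float(np.mean(voiced)),  # fraction of frames detected as voiced
    }
```

Collapsing frame-wise f0 and RMS into means and standard deviations is one common way to obtain utterance-level features for a classifier, though the cited studies each use their own feature sets.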
“…An all-encompassing term for such vocal qualities of speech is prosody (i.e., tone of voice). Research has shown that prosody may support the correct interpretations of utterances independently of linguistic comprehension (Paulmann, 2016; Thompson and Balkwill, 2009; Kitayama and Ishii, 2002), with studies reporting recognition rates for emotions to be significantly higher than chance (Cowen et al., 2019a, 2019b; Lausen and Schacht, 2018; Cordaro et al., 2016; Paulmann and Uskul, 2014; Juergens et al., 2013; Scherer et al., 2001). In addition, metacognition, the ability to actively monitor and reflect upon one's own performance, has been argued to impact judgements of accuracy in emotion recognition tasks (Begue et al., 2019; Kelly and Metcalfe, 2011; Dunlosky and Metcalfe, 2009).…”
Section: Introduction (mentioning)
confidence: 99%
“…For the label-learning phase, we assume that labellers are able to estimate multiple sets of n_A emotional attributes empirically for each of c unseen emotional states. We take this assumption as the first-hand learning for emotions that usually comes from observing physiological or behavioural data [24], indicating that it is natural for different human beings to vocally express one emotional state in various ways [16,17].…”
Section: Label Learning (mentioning)
confidence: 99%
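The label-learning excerpt describes unseen emotional states characterized by sets of n_A emotional attributes. A minimal numpy sketch of the generic zero-shot step this implies, matching an utterance's predicted attribute vector against the attribute prototypes of c unseen classes, is given below; the cosine-similarity scoring, shapes, and names are assumptions for illustration, not the cited paper's formulation.

```python
# Illustrative sketch: assign a sample to the unseen emotion class whose attribute
# prototype best matches the attributes predicted for that sample. Shapes, names,
# and cosine scoring are assumptions, not the cited method.
import numpy as np

def zero_shot_predict(pred_attributes, class_attributes):
    """pred_attributes: (n_A,) attributes predicted for one utterance.
    class_attributes: (c, n_A) attribute prototypes for c unseen emotion classes.
    Returns the index of the best-matching unseen class."""
    a = pred_attributes / (np.linalg.norm(pred_attributes) + 1e-8)
    C = class_attributes / (np.linalg.norm(class_attributes, axis=1, keepdims=True) + 1e-8)
    scores = C @ a                      # cosine similarity with each class prototype
    return int(np.argmax(scores))

# Toy usage: 3 unseen classes described by 4 attributes (e.g., valence, arousal, ...)
prototypes = np.array([[0.9, 0.8, 0.1, 0.2],
                       [0.1, 0.9, 0.8, 0.3],
                       [0.2, 0.1, 0.2, 0.9]])
print(zero_shot_predict(np.array([0.8, 0.7, 0.2, 0.1]), prototypes))  # -> 0
```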
“…Still, very few of the existing ZSL methods provide a reasonable framework for zero-shot emotion learning in speech. This is due, in part, to the latent emotional descriptors in paralinguistics [6,15] and complicated forms of expression of emotion [16,17].…”
Section: Introduction (mentioning)
confidence: 99%