Human speech expresses emotional meaning not only through semantics, but also through certain attributes of the voice, such as pitch or loudness. In investigations of vocal emotion recognition, there is considerable variability in the types of stimuli and procedures used to examine their influence on emotion recognition. In addition, accurate metacognition has been argued to promote correct and confident interpretations in emotion recognition tasks, yet such associations have rarely been studied. We addressed this gap by examining the impact of vocal stimulus type and prosodic speech attributes on emotion recognition and on a person's confidence in a given response. We analysed a total of 1038 emotional expressions according to a baseline set of 13 prosodic acoustic parameters. Results showed that these parameters provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Emotion recognition and confidence judgments depended on the stimulus material, as both could be reliably predicted by different constellations of acoustic features. Finally, listeners' accuracy and confidence judgments were significantly higher for affect bursts than for speech-embedded stimuli, and correct classification of emotional expressions elicited increased confidence judgments. Together, these findings show that vocal stimulus type and prosodic attributes of speech strongly influence emotion recognition and listeners' confidence in their responses.
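To make the classification step described in this abstract concrete, the following is a minimal, hypothetical sketch of how prosodic features (pitch, loudness, duration) might be extracted from speech recordings and passed to a cross-validated statistical classifier. It is not the authors' pipeline: the feature set, file paths, and emotion labels below are illustrative assumptions, whereas the study itself used a baseline set of 13 prosodic acoustic parameters.

# Hypothetical sketch: prosodic feature extraction plus cross-validated
# classification of emotion categories. Feature choices, files, and labels
# are assumptions for illustration, not the authors' actual parameter set.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def prosodic_features(path):
    """Return a small vector of pitch-, energy-, and duration-based features."""
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)   # fundamental frequency contour (pitch)
    rms = librosa.feature.rms(y=y)[0]               # frame-wise energy (loudness proxy)
    return np.array([
        np.nanmean(f0), np.nanstd(f0),              # mean pitch and pitch variability
        rms.mean(), rms.std(),                      # mean energy and energy variability
        len(y) / sr,                                # utterance duration in seconds
    ])

# 'files' and 'labels' are hypothetical: paths to recorded emotional
# expressions and their emotion categories (e.g. "anger", "joy", "fear").
# X = np.vstack([prosodic_features(f) for f in files])
# scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
# print("Mean classification accuracy:", scores.mean())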
The conflicting findings from the few studies on gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender or the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody.