Laughter is often considered the product of humour. It is, however, a social emotion, occurring most often in interactions, where it is associated with bonding, agreement, affection, and emotional regulation. Laughter is underpinned by complex neural systems, allowing it to be used flexibly. In humans and chimpanzees, social (voluntary) laughter is distinct from evoked (involuntary) laughter, a distinction also seen in brain imaging studies of laughter.
Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted Spontaneous (authentic) laughter with Volitional (fake) laughter produced under full voluntary control, finding that Spontaneous laughter was higher in pitch, longer in duration, and spectrally distinct from Volitional laughter. In a behavioural experiment, listeners perceived Spontaneous and Volitional laughter as distinct in arousal, valence, and authenticity. Multiple regression analyses further revealed that acoustic measures significantly predicted these affective and authenticity judgements, with the notable exception of authenticity ratings for Spontaneous laughter. The combination of acoustic predictors differed by laughter type: ratings of Volitional laughter were uniquely predicted by the harmonics-to-noise ratio (HNR). To better understand how HNR reflects authenticity-related changes in vocal tract configuration during laughter production, we ran an additional experiment in which phonetically trained listeners rated each laugh for breathiness, nasality, and mouth opening. Volitional laughter was rated as significantly more nasal than Spontaneous laughter, and these item-wise physiological ratings also significantly predicted the affective judgements obtained in the first experiment. Our findings suggest that, as an alternative to traditional acoustic measures, ratings of phonatory and articulatory features can be useful descriptors of the acoustic qualities of non-verbal emotional vocalizations, and of their perceptual implications.
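To make the analysis pipeline concrete, here is a minimal sketch of how the reported acoustic measures (mean pitch, duration, and HNR) could be extracted and regressed against item-wise perceptual ratings. This is not the authors' code: it assumes Python with the parselmouth (Praat) and statsmodels libraries, and the stimulus paths and ratings file are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): extract mean pitch, duration,
# and harmonics-to-noise ratio (HNR) per laugh, then regress item-wise
# perceptual ratings on these acoustic measures.
import glob

import numpy as np
import statsmodels.api as sm
import parselmouth
from parselmouth.praat import call


def acoustic_features(wav_path):
    snd = parselmouth.Sound(wav_path)
    duration = call(snd, "Get total duration")        # seconds
    pitch = call(snd, "To Pitch", 0.0, 75, 600)       # floor/ceiling in Hz
    mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)         # mean HNR in dB
    return [mean_f0, duration, hnr]


# Hypothetical inputs: one .wav per laugh and one mean rating per item.
laugh_files = sorted(glob.glob("laughs/*.wav"))
ratings = np.loadtxt("mean_authenticity_ratings.txt")

X = sm.add_constant(np.array([acoustic_features(f) for f in laugh_files]))
print(sm.OLS(ratings, X).fit().summary())
```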
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within-person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need not only to 'tell people apart' (perceiving exemplars from two different speakers as separate identities) but also to 'tell people together' (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within-person variability affects voice identity perception. Listeners who were either familiar or unfamiliar with a popular TV show sorted naturally varying voice clips from two of its speakers into clusters representing perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker for different identities. These findings point towards a selective failure in 'telling people together'. Our study highlights within-person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re-evaluation of theoretical models to account for natural variability during identity perception.
In 2 behavioral experiments, we explored how the extraction of identity-related information from familiar and unfamiliar voices is affected by naturally occurring vocal flexibility and variability, introduced by different types of vocalizations and levels of volitional control during production. In the first experiment, participants performed a speaker discrimination task on vowels, volitional (acted) laughter, and spontaneous (authentic) laughter from 5 unfamiliar speakers. We found that performance was significantly impaired for spontaneous laughter, a vocalization produced under reduced volitional control. We additionally found that the detection of identity-related information failed to generalize across different types of nonverbal vocalizations (e.g., laughter vs. vowels) and across mismatches in volitional control within vocalization pairs (e.g., volitional vs. spontaneous laughter), with performance levels indicating an inability to discriminate between speakers. In a second experiment, we explored whether personal familiarity with the speakers would afford greater accuracy and better generalization of identity perception. Using new stimuli, we largely replicated our previous findings: whereas familiarity afforded a consistent performance advantage for speaker discrimination, the experimental manipulations impaired performance to similar extents for familiar and unfamiliar listener groups. We discuss our findings with reference to prototype-based models of voice processing and suggest potential underlying mechanisms and representations of familiar and unfamiliar voice perception.
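The abstract does not report how discrimination performance was scored; a common metric for same/different tasks is the signal detection measure d′, where a value near zero corresponds to the inability to discriminate described above. Below is a minimal, hypothetical sketch in Python (the response counts are invented for illustration).

```python
# Minimal sketch (not the authors' analysis): sensitivity (d') for a
# same/different speaker discrimination task.
from scipy.stats import norm


def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)


# Hypothetical counts: "same" responses on same-speaker trials are hits;
# "same" responses on different-speaker trials are false alarms.
print(d_prime(hits=24, misses=16, false_alarms=21, correct_rejections=19))
# A d' near 0 indicates chance-level speaker discrimination.
```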