SUMMARY The dichotomy between vocal learners and non-learners is a fundamental distinction in the study of animal communication. Male zebra finches (Taeniopygia guttata) are vocal learners that acquire a song resembling their tutors’, whereas females can only produce innate calls. The acoustic structure of short calls, produced by both males and females, is not learned. However, these calls can be precisely coordinated across individuals. To examine how birds learn to synchronize their calls, we developed a vocal robot that exchanges calls with a partner bird. Because birds answer the robot with stereotyped latencies, we could program it to disrupt each bird’s responses by producing calls that are likely to coincide with the bird’s. Within minutes, the birds learned to avoid this disruptive masking (jamming) by adjusting the timing of their responses. Notably, females exhibited greater adaptive timing plasticity than males. Further, when challenged with complex rhythms containing jamming elements, birds dynamically adjusted the timing of their calls in anticipation of jamming. Blocking the song system cortical output dramatically reduced the precision of birds’ response timing and abolished their ability to avoid jamming. Surprisingly, we observed this effect in both males and females, indicating that the female song system is functional rather than vestigial. We suggest that descending forebrain projections, including the song-production pathway, function as a general-purpose sensorimotor communication system. In the case of calls, it enables plasticity in vocal timing to facilitate social interactions, whereas in the case of songs, plasticity extends to developmental changes in vocal structure.
Prosody is an important tool of human communication, carrying both affective and pragmatic messages in speech. Prosody recognition relies on processing of acoustic cues, such as the fundamental frequency of the voice signal, and their interpretation according to acquired socioemotional scripts. Individuals with autism spectrum disorders (ASD) show deficiencies in affective prosody recognition. These deficiencies have been mostly associated with general difficulties in emotion recognition. The current study explored an additional association between affective prosody recognition in ASD and auditory perceptual abilities. Twenty high-functioning male adults with ASD and 32 typically developing male adults, matched on age and verbal abilities, undertook a battery of auditory tasks. These included affective and pragmatic prosody recognition tasks, two psychoacoustic tasks (pitch direction recognition and pitch discrimination), and a facial emotion recognition task, representing nonvocal emotion recognition. Compared with controls, the ASD group demonstrated poorer performance on both vocal and facial emotion recognition, but not on pragmatic prosody recognition or on any of the psychoacoustic tasks. Both groups showed strong associations between psychoacoustic abilities and prosody recognition, both affective and pragmatic, although these were more pronounced in the ASD group. Facial emotion recognition predicted vocal emotion recognition in the ASD group only. These findings suggest that auditory perceptual abilities, alongside general emotion recognition abilities, play a significant role in affective prosody recognition in ASD.
Humans and oscine songbirds share the rare capacity for vocal learning. Songbirds have the ability to acquire songs and calls of various rhythms through imitation. In several species, birds can even coordinate the timing of their vocalizations with other individuals in duets that are synchronized with millisecond accuracy. It is not known, however, whether songbirds can perceive rhythms holistically, or whether they are capable of spontaneous entrainment to complex rhythms in a manner similar to humans. Here we review emerging evidence from studies of rhythm generation and vocal coordination across songbirds and humans. In particular, recently developed experimental methods have revealed neural mechanisms underlying the temporal structure of song and have allowed us to test birds' abilities to predict the timing of rhythmic social signals. Surprisingly, zebra finches can readily learn to anticipate the calls of a “vocal robot” partner and alter the timing of their answers to avoid jamming, even in reference to complex rhythmic patterns. This capacity resembles, to some extent, the human predictive motor response to an external beat. In songbirds, this is driven, at least in part, by the forebrain song system, which controls song timing and is essential for vocal learning. Building upon previous evidence for spontaneous entrainment in human and non-human vocal learners, we propose a comparative framework for future studies aimed at identifying shared mechanisms of rhythm production and perception across songbirds and humans.