Whether nonhuman primates can decouple their innate vocalizations from accompanying levels of arousal, or from specific events in the environment, to achieve cognitive control over their vocal utterances has been debated for decades. We show that rhesus monkeys can be trained to produce different call types on command in response to arbitrary visual cues. Furthermore, we report that one monkey learned to switch between two distinct call types from trial to trial in response to different visual cues. A controlled behavioral protocol and a data analysis based on signal detection theory allowed us to exclude noncognitive factors as a cause of the monkeys' vocalizations. Our findings further suggest that monkeys have rudimentary control over acoustic call parameters. Together, these findings indicate that monkeys are able to volitionally initiate vocal production and can therefore instrumentalize their vocal behavior to perform a behavioral task successfully.
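The signal detection analysis mentioned above can be illustrated with a short sketch. The following Python snippet (a minimal sketch with hypothetical trial counts, not the authors' analysis code) computes the sensitivity index d' from hit and false-alarm rates, the core quantity on which such an analysis rests.

```python
# Minimal sketch (hypothetical trial counts, not the authors' code): computing
# the sensitivity index d' for a cued-vocalization task, separating genuine
# cue-driven calling from spontaneous call production.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' with a log-linear correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: a session with 90 cued calls in 100 go trials and
# 5 spontaneous calls in 100 catch trials (hypothetical numbers).
print(d_prime(hits=90, misses=10, false_alarms=5, correct_rejections=95))
```

A d' well above zero indicates that calling is driven by the cue rather than by the animal's baseline tendency to vocalize.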
Cognitive vocal control is indispensable for human language. Frontal lobe areas are involved in initiating purposeful vocalizations, but their respective functions remain elusive. We explored the roles of frontal lobe areas in initiating volitional vocalizations. Macaques were trained to vocalize in response to visual cues. Recordings from the ventrolateral prefrontal cortex (vlPFC), the anterior cingulate cortex (ACC), and the pre-supplementary motor area (preSMA) revealed differences in single-neuron and population activity. Pre-vocal activity emerged first in vlPFC after the go cue, with onset activity that was tightly linked to vocal reaction times. In contrast, the onset of pre-vocal ACC activity was not indicative of call timing; instead, ramping activity that reached threshold values predicted call onset. Neurons in preSMA showed the weakest correlation with volitional call initiation and timing. These results suggest that vlPFC encodes the decision to produce volitional calls, whereas downstream ACC represents a motivational preparatory signal, followed by a general motor priming signal in preSMA.
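To make the contrast between the two reported activity patterns concrete, the following toy simulation (synthetic trials and assumed parameters, not recorded data) contrasts onset-locked pre-vocal activity whose latency tracks vocal reaction time, as described for vlPFC, with ramping activity whose threshold crossing coincides with call onset while its onset does not, as described for ACC.

```python
# Toy simulation (synthetic trials, assumed parameters - not recorded data)
# contrasting two pre-vocal coding schemes.
import numpy as np

rng = np.random.default_rng(0)
rts = rng.uniform(0.4, 1.2, size=200)  # hypothetical vocal reaction times (s)

# Onset-locked scheme (vlPFC-like): a pre-vocal burst starts at a latency
# that co-varies with the upcoming reaction time.
burst_latency = 0.15 + 0.8 * rts + rng.normal(0.0, 0.05, size=rts.size)

# Ramp-to-threshold scheme (ACC-like): firing ramps at a trial-specific slope
# from a fixed post-cue onset; the call is emitted when the ramp crosses a
# fixed threshold, so the crossing time (not the ramp onset) predicts timing.
threshold = 1.0
slopes = threshold / rts
ramp_onset = 0.1 + rng.normal(0.0, 0.05, size=rts.size)
crossing_time = threshold / slopes + rng.normal(0.0, 0.02, size=rts.size)

print("burst latency vs RT:      r =", np.corrcoef(burst_latency, rts)[0, 1])
print("ramp onset vs RT:         r =", np.corrcoef(ramp_onset, rts)[0, 1])
print("threshold crossing vs RT: r =", np.corrcoef(crossing_time, rts)[0, 1])
```

In this toy model, burst latency and threshold-crossing time correlate strongly with reaction time, whereas ramp onset does not, mirroring the dissociation described in the abstract.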
Songbirds are renowned for their acoustically elaborate songs. However, it is unclear whether songbirds can cognitively control their vocal output. Here, we show that crows, songbirds of the corvid family, can be trained to exert control over their vocalizations. In a detection task, three male carrion crows rapidly learned to emit vocalizations in response to a visual cue with no inherent meaning (go trials) and to withhold vocalizations in response to another cue (catch trials). Two of these crows were then trained on a go/nogo task with the cue colors reversed, in addition to being rewarded for withholding vocalizations to yet another cue (nogo trials). Vocalizations in response to detection of the go cue were temporally precise and highly reliable in all three crows. The crows also quickly learned to withhold vocal output on nogo trials, showing that vocalizations were not produced merely in anticipation of a food reward on correct trials. These results demonstrate that corvids can volitionally control the release and onset of their vocalizations, suggesting that songbird vocalizations are under cognitive control and can be decoupled from affective states.
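A brief sketch of how such go/nogo vocal behavior can be quantified (using a hypothetical trial log, not the published data): the vocal response rate per trial type and the temporal precision of vocal onsets on go trials.

```python
# Sketch with a hypothetical trial log (not the study's data): summarize a
# go/nogo vocalization session by call rate per trial type and by the mean
# and spread of vocal onset latencies.
import numpy as np

# Each trial: (trial_type, vocalized, latency_s); latency is NaN when no call.
trials = [
    ("go", True, 0.62), ("go", True, 0.58), ("nogo", False, np.nan),
    ("go", False, np.nan), ("nogo", True, 1.40), ("go", True, 0.66),
]

def summarize(trials, trial_type):
    subset = [t for t in trials if t[0] == trial_type]
    rate = np.mean([t[1] for t in subset])              # fraction of trials with a call
    latencies = np.array([t[2] for t in subset if t[1]])  # onsets of emitted calls
    return rate, np.nanmean(latencies), np.nanstd(latencies)

for tt in ("go", "nogo"):
    rate, mean_lat, sd_lat = summarize(trials, tt)
    print(f"{tt}: call rate={rate:.2f}, latency={mean_lat:.2f}+/-{sd_lat:.2f} s")
```

A high call rate with a small latency spread on go trials, combined with a low call rate on nogo trials, is the pattern described as temporally precise, reliable, and volitionally withheld vocal output.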
The evolutionary origins of human language are obscured by the scarcity of essential linguistic characteristics in nonhuman primate communication systems. Volitional control of vocal utterances is one such indispensable feature of language. We investigated the ability of two monkeys to volitionally utter species-specific calls over many years. Both monkeys reliably vocalized on command as juveniles but discontinued this controlled vocal behavior in adulthood. This emerging inability was confined to volitional vocal production: the monkeys continued to vocalize spontaneously and continued to use hand movements as instructed responses during adulthood. This greater vocal flexibility early in ontogeny supports the neoteny hypothesis of human evolution, which proposes that linguistic capabilities were enabled by an expansion of the juvenile period during human development.
Background: Mice produce ultrasonic vocalizations in various inter-individual encounters and at high call rates. However, it is so far virtually unknown how these vocal patterns are generated. On the one hand, the vocalizations could be embedded into the normal respiratory cycle, as happens in bats and other mammals that produce similar call rates and frequencies. On the other hand, mice could possess distinct vocal pattern generating systems that are capable of modulating the respiratory cycle, as is the case in nonhuman and human primates. In the present study, we investigated the temporal call patterns of two mammalian species, bats and mice, in order to differentiate between these two possibilities for mouse vocalizations. Our primary focus was on comparing the mechanisms for the production of rapid, successive ultrasound calls of comparable frequency ranges in the two species.
Results: We analyzed the temporal call pattern characteristics of mice and compared them to those of ultrasonic echolocation calls produced by horseshoe bats. We measured the distributions of call durations, call intervals, and inter-call intervals in the two species. In the bat, and consistent with previous studies, call duration was independent of the corresponding call interval and negatively correlated with the corresponding inter-call interval. This indicates that echolocation call production in the bat is tightly coupled to the respiratory cycle. In contrast, call intervals in the mouse were directly correlated with call duration. Importantly, call duration was not, or was only slightly, correlated with inter-call intervals, consistent with the idea that vocal production in the mouse is largely independent of the respiratory cycle.
Conclusions: Our findings suggest that ultrasonic vocalizations in mice are produced by call-pattern generating mechanisms similar to those found in primates, in contrast to the production mechanisms of ultrasonic echolocation calls in horseshoe bats. These results are particularly interesting given that mouse vocalizations have recently attracted increased attention as potential indicators of disease progression in mouse models of human neurodegenerative and neurodevelopmental disorders.
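The kind of timing analysis described in the Results can be sketched as follows (hypothetical call records, not the study's data): correlate each call's duration with the following call interval and with the inter-call interval to ask whether call production is locked to the respiratory cycle.

```python
# Sketch of the described timing analysis on hypothetical call records.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical call onsets and offsets (seconds) for one recording.
onsets = np.array([0.00, 0.35, 0.71, 1.10, 1.52, 1.90])
offsets = np.array([0.06, 0.43, 0.78, 1.19, 1.58, 1.99])

durations = offsets - onsets                        # call duration
call_intervals = np.diff(onsets)                    # onset-to-onset interval
inter_call_intervals = onsets[1:] - offsets[:-1]    # silent gap between calls

# Respiration-locked production predicts duration ~ inter-call interval
# (a longer call leaves a shorter gap within a fixed breath), whereas an
# independent call-pattern generator predicts duration ~ call interval instead.
print("duration vs call interval:     ", pearsonr(durations[:-1], call_intervals))
print("duration vs inter-call interval:", pearsonr(durations[:-1], inter_call_intervals))
```

Which of the two correlations dominates is what distinguishes the bat-like (respiration-coupled) pattern from the mouse-like (pattern-generator) pattern reported above.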