Variations in vocal tone during speech production, known as prosody, provide information about the emotional state of the speaker. In recent years, functional imaging has suggested a role for both the right and left inferior frontal cortices in the attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we investigated the suitability of functional near-infrared spectroscopy (fNIRS) for studying frontal lateralization of human emotional vocalization processing during explicit and implicit categorization and discrimination. Participants listened to speech-like but semantically meaningless words spoken in a neutral, angry, or fearful tone and had to categorize or discriminate them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and they processed the linguistic content of stimuli faster than their emotional content, while an interaction between condition (emotion/word) and task (discrimination/categorization) influenced accuracy. At the brain level, we found a four-way interaction in the fNIRS signal between condition, task, emotion, and channel, highlighting the involvement of the right hemisphere in processing fear stimuli and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation in humans, fostering its application to the study of emotional appraisal.
The central nervous system has developed specialized neural systems to process relevant information, including emotional information in the auditory domain. This chapter discusses the functional roles of temporal regions such as the superior temporal sulcus (STS) and gyrus (STG), the amygdala, and subcortical grey nuclei, as well as regions in the frontal lobe such as the orbitofrontal cortex (OFC) and inferior frontal gyri (IFG), during the processing of emotional prosody. The involvement of these different regions in the successive steps of auditory information processing, however, remains unclear. A model is proposed based on results from functional magnetic resonance imaging (fMRI) studies and studies using electroencephalographic (EEG) recordings as well as intracranial local field potentials (LFPs). The functional coupling between different brain areas, such as the STS, the IFG, the amygdala, and OFC regions, is discussed in light of recent empirical findings.
Until recently, research on the brain networks underlying the decoding and processing of emotional voice prosody focused on modulations in the primary and secondary auditory cortices, ventral frontal and prefrontal cortices, and the amygdala. Growing interest in a specific role for the basal ganglia and cerebellum has recently brought these regions into the spotlight. In the present study, we aimed to characterize the role of such subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high-resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia, and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia (especially between the putamen and external globus pallidus) and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus, and subthalamic nucleus, and of several cerebellar lobules and nuclei, to an efficient decoding of and response to vocal emotions.