Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that has recently been used in a variety of cognitive paradigms. Yet, it remains unclear whether fNIRS is suitable for studying complex cognitive processes such as categorization or discrimination. Previously, functional imaging has suggested a role of both inferior frontal cortices in the attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we extended paradigms used in functional magnetic resonance imaging (fMRI) to investigate the suitability of fNIRS for studying frontal lateralization of human emotion vocalization processing during explicit and implicit categorization and discrimination, using mini-blocks and event-related stimuli. Participants heard speech-like but semantically meaningless pseudowords spoken in various tones and evaluated them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and they processed the linguistic content of stimuli faster than their emotional content. Interactions between condition (emotion/word), task (discrimination/categorization) and emotion content (anger, fear, neutral) influenced accuracy and reaction time. At the brain level, we found a modulation of Oxy-Hb changes in the inferior frontal gyrus (IFG) depending on condition, task, emotion and hemisphere (right or left), highlighting the involvement of the right hemisphere in processing fear stimuli and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation, fostering its application to complex cognitive paradigms.
The present paper explores the benefits and capabilities of various emerging state-of-the-art interactive 3D and Internet of Things technologies and investigates how these technologies can be exploited to develop a more effective technology-supported exposure therapy solution for social anxiety disorder. “DJINNI” is a conceptual design of an in vivo augmented reality (AR) exposure therapy mobile support system that exploits several capturing technologies to assess the patient’s state and situation through vision-based, audio-based, and physiology-based analysis, as well as indoor/outdoor localization techniques. DJINNI also comprises an innovative virtual reality exposure therapy system that is adaptive and customizable to the demands of the in vivo experience and therapeutic progress. DJINNI follows a gamification approach in which rewards and achievements are used to motivate the patient to progress through her/his treatment. The current paper reviews the state of the art of the technologies needed for such a solution and recommends how these technologies could be integrated into the development of an individually tailored, yet feasible and effective, AR/virtual reality-based exposure therapy. Finally, the paper outlines how DJINNI could become part of classical cognitive behavioral treatment and how such a setup could be validated.
Variations of the vocal tone of the voice during speech production, known as prosody, provide information about the emotional state of the speaker. In recent years, functional imaging has suggested a role of both the right and left inferior frontal cortices in attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we investigated the suitability of functional near-infrared spectroscopy (fNIRS) to study frontal lateralization of human emotion vocalization processing during explicit and implicit categorization and discrimination. Participants listened to speech-like but semantically meaningless words spoken in a neutral, angry or fearful tone and had to categorize or discriminate them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and they processed the linguistic content of stimuli faster than their emotional content, while an interaction between condition (emotion/word) and task (discrimination/categorization) influenced accuracy. At the brain level, we found a four-way interaction in the fNIRS signal between condition, task, emotion and channel, highlighting the involvement of the right hemisphere in processing fear stimuli and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable to study vocal emotion evaluation in humans, fostering its application to the study of emotional appraisal.
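The four-way interaction described above corresponds to a fully within-subject factorial analysis of the Oxy-Hb signal. As a minimal sketch, assuming each subject contributes one Oxy-Hb estimate per condition x task x emotion x channel cell (column names and the input file are illustrative, not the authors' actual pipeline), such an analysis could be run with a repeated-measures ANOVA:

```python
# Hedged sketch of a four-way repeated-measures ANOVA on Oxy-Hb estimates.
# 'oxyhb_estimates.csv' and its column names are assumptions for illustration.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x condition x task x emotion x channel, with the mean
# Oxy-Hb change (e.g., a block average or GLM beta) as the dependent variable.
df = pd.read_csv("oxyhb_estimates.csv")

model = AnovaRM(
    data=df,
    depvar="oxy_hb",
    subject="subject",
    within=["condition", "task", "emotion", "channel"],
    aggregate_func="mean",  # average duplicate observations within each cell
)
result = model.fit()
print(result.anova_table)   # F statistics and p-values, incl. the 4-way interaction
```

The within-subject design matches the paradigm in which every participant completed all conditions, tasks and emotions across the same set of channels.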
Humans are adept at extracting affective information from the vocalizations of not only humans but also other animals. Current research has mainly focused on phylogenetic proximity to explain such cross-species emotion recognition abilities. However, because research protocols are inconsistent across studies, it remains unclear whether human recognition of vocal affective cues of other species is due to cross-taxa similarities between acoustic parameters, the phylogenetic distances between species, or a combination of both. To address this, we first analysed acoustic variation in 96 affective vocalizations, including agonistic and affiliative contexts, of humans and three other primate species: rhesus macaques, chimpanzees and bonobos; the latter two being equally phylogenetically distant from humans. Using Mahalanobis distances, we found that chimpanzee vocalizations were acoustically closer to those of humans than to those of bonobos, confirming a potential derived vocal evolution in the bonobo lineage. Second, we investigated whether human participants recognized the affective basis of these vocalizations in two tasks, asking them either to categorize (A vs. B) or to discriminate (A vs. non-A) vocalizations based on their affective content. Results showed that participants could reliably categorize and discriminate most of the affective vocal cues expressed by other primates, except threat calls by bonobos and macaques. Overall, participants were most accurate at detecting chimpanzee vocalizations but not bonobo vocalizations, which provides support for both the phylogenetic proximity and acoustic similarity hypotheses. Our results highlight for the first time the importance of both phylogenetic and acoustic parameter level explanations in cross-species affective perception, drawing a more complex picture of our natural understanding of animal signals.
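The acoustic comparison above relies on Mahalanobis distances between species' call feature distributions. As a minimal sketch, assuming each call is summarized by a fixed-length acoustic feature vector (e.g., F0 statistics, duration, spectral measures) and that covariances are pooled across the two samples being compared (the pooling choice and placeholder data are assumptions, not the authors' exact procedure), the distance between two species' centroids could be computed as follows:

```python
# Hedged sketch: Mahalanobis distance between two species' acoustic feature sets.
import numpy as np
from scipy.spatial.distance import mahalanobis

def species_distance(features_a: np.ndarray, features_b: np.ndarray) -> float:
    """Distance between the centroids of two (n_calls x n_features) arrays,
    scaled by the pooled covariance of both samples."""
    pooled_cov = np.cov(np.vstack([features_a, features_b]), rowvar=False)
    inv_cov = np.linalg.pinv(pooled_cov)  # pseudo-inverse for numerical stability
    return mahalanobis(features_a.mean(axis=0),
                       features_b.mean(axis=0),
                       inv_cov)

# Placeholder data: 10 acoustic features per call, random values for illustration.
rng = np.random.default_rng(0)
human = rng.normal(size=(40, 10))
chimp = rng.normal(loc=0.3, size=(30, 10))
bonobo = rng.normal(loc=1.0, size=(26, 10))
print(species_distance(chimp, human), species_distance(bonobo, human))
```

A smaller distance for the chimpanzee-human pair than for the bonobo-human pair would reflect the acoustic-similarity pattern reported in the abstract.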