2019
DOI: 10.1101/526558
Preprint

Human discrimination and categorization of emotions in voices: a functional Near-Infrared Spectroscopy (fNIRS) study

Abstract: Variations of the vocal tone of the voice during speech production, known as prosody, provide information about the emotional state of the speaker. In recent years, functional imaging has suggested a role of both right and left inferior frontal cortices in attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we investigated the suitability of functional Near-Infrared Spectroscopy (fNIRS) to study frontal lateralization of human emotion vocalization processing during expli…

Cited by 3 publications (6 citation statements). References 46 publications.
“…We thank the SNSF for supporting the National Center of Competence in Affective Sciences (NCCR Grant 51NF40-104897 to DG) hosted by the Swiss Center for Affective Sciences. This manuscript has been released as a Pre-Print at bioRxiv (Gruber et al, 2019). We also thank two reviewers for their valuable comments on a previous version of this manuscript and two additional reviewers for their comments that led to the present version of the manuscript.…”
Section: Data Availability Statement
confidence: 99%
“…(2019) provided participants with a series of semantically meaningless words spoken with various inflections and required the participants to either categorise or discriminate them based on their emotional or linguistic context. The findings revealed that, compared to a neutral condition, O2Hb concentrations decreased in the frontal left hemisphere during categorisation and discrimination of emotional tones, especially in the fear condition (Gruber et al., 2019).…”
Section: Discussion
confidence: 92%
“…Some studies presented obvious stimuli over a prolonged period, while many other studies elected for a rapid presentation of subtle expressions of emotion. One study by Gruber et al. (2019) provided participants with a series of semantically meaningless words spoken with various inflections and required the participants to either categorise or discriminate them based on their emotional or linguistic context.…”
Section: Discussion
confidence: 99%
“…As summarised by Frühholz, Trost, et al (2016), fMRI studies reveal that emotions conveyed in speech elicit cortical haemodynamic activity bilaterally in STC (Witteman et al, 2012) and IFG (Leitman et al, 2010; Witteman et al, 2012). With fNIRS, D. … found emotion-evoked activation in bilateral …; studies do not include both STG (Anuardi & Yamazaki, 2019; Gruber et al, 2020) or do not report the simple effect of hearing vocal emotions on cortical activation (Sonkaya & …). … reported to evoke greater activity in the right, relative to left, hemisphere (Beaucousin et al, 2007; Kotz et al, 2006; Kreitewolf et al, 2014; Seydell-Greenwald et al, 2020; von Cramon et al, 2003; Witteman et al, 2012). This right-hemisphere bias (… Zhen et al, 2021) is commonly attributed to a right-hemispheric specialisation for decoding spectral information, such as contours of the F0 (the fundamental frequency, or voice pitch), that unfolds over relatively long timescales.…”
Section: Introduction
confidence: 99%