2018
DOI: 10.1093/scan/nsy015

Prosody production networks are modulated by sensory cues and social context

Abstract: The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued or (iv) with interactive feedback. In…


Cited by 7 publications (6 citation statements). References 64 publications.
“…In line with our hypothesis, the social vocal modulation conditions (hostile, likeable, intelligent) engaged the dorsal and ventral portions of the medial prefrontal cortex (mPFC), the bilateral superior temporal sulci (STS), the left hippocampal formation and the precuneus more strongly than the nonsocial vocal modulation condition (body size). These areas comprise the SBN (Van Overwalle 2009; Schurz et al 2014) and have been partly implicated in previous studies requiring socially meaningful voice production, during impersonations (McGettigan et al 2013; Brown et al 2019) or while volitionally modulating the voice within a social context (Klasen et al 2018). In the current study, we show the first evidence for engagement of social processing areas during voluntary voice change to express beneficial social traits.…”
Section: Neural Mechanisms Underlying Social Voice Modulation
Citation type: mentioning, confidence: 98%
“…The expression of such affective vocalizations has been proposed to rely on the interaction of a dual-pathway system consisting of the neocortical regions of the VMN and a phylogenetically older network of subcortical brain structures such as the basal ganglia and the amygdala (Ackermann et al 2014; Hage and Nieder 2016). In line with this, voluntary affective vocal expression engages both vocomotor areas related to volitional expression and areas related to processing affect, such as the IFG, BG, ACC, STC and amygdala (Barrett et al 2004; Aziz-Zadeh et al 2010; Laukka et al 2011; Pichon and Kell 2013; Frühholz et al 2015; Klaas et al 2015; Belyk and Brown 2016; Mitchell et al 2016; Klasen et al 2018). This interplay of affect processing streams and the vocomotor network therefore suggests that some informational integration is necessary to achieve the successful expression of affect in the voice.…”
Citation type: mentioning, confidence: 99%
“…Current models of auditory emotion processing highlight a right-hemispheric lateralization (Brück, Kreifelts, & Wildgruber, 2011; Klasen et al, 2018). Right-sided primary and higher-order acoustic regions extract suprasegmental information, followed by processing of meaningful suprasegmental sequences in posterior parts of the right STS, followed by evaluation of emotional prosody in the IFG (Wildgruber, Ackermann, Kreifelts, & Ethofer, 2006).…”
Section: Discussion
Citation type: mentioning, confidence: 99%
“…Right-sided primary and higher-order acoustic regions extract suprasegmental information, followed by processing of meaningful suprasegmental sequences in posterior parts of the right STS, followed by evaluation of emotional prosody in the IFG (Wildgruber, Ackermann, Kreifelts, & Ethofer, 2006). Neuroimaging findings (Klasen et al, 2018) highlight the relevance of the right IFG for emotional prosody. In our study, a right-hemispheric lateralization was observed for the fourfold conjunction of all maps (Figure 3c), showing specific effects of emotion evaluation independently of modality and congruency of target concepts.…”
Section: Discussion
Citation type: mentioning, confidence: 99%
“…In this perspective, authors have often recorded preschoolers during the ADOS, which offers a standardized setting for social interaction 32,33 . Although social interaction recordings provide ecologically valid prosody compared to reading texts or naming pictures 11,34,35 , they contain noise and adults' voices that must be manually removed before analyzing participants' prosody. A promising alternative to this time-consuming manual preprocessing relies on applying diarization algorithms, i.e.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
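
The diarization approach mentioned in the last citation statement can be sketched briefly. The following is a minimal, hypothetical example, not taken from the cited study or from Klasen et al. (2018): it assumes the pyannote.audio library with its pretrained speaker-diarization pipeline, and the file name recording.wav and the idea of discarding adult speech segments before prosody analysis are illustrative assumptions.

```python
# Minimal sketch (illustrative only): automatic speaker diarization with pyannote.audio,
# as an alternative to manually removing adult speech before analyzing a child's prosody.
# Assumes the pretrained "pyannote/speaker-diarization" pipeline is available
# (recent versions require a Hugging Face access token) and a local file "recording.wav".
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
diarization = pipeline("recording.wav")  # returns a pyannote.core.Annotation

# List who speaks when; segments attributed to the child speaker could then be
# retained for prosody analysis and adult segments discarded.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.2f}s - {turn.end:.2f}s")
```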