2015
DOI: 10.1093/scan/nsv083

The integration of facial and vocal cues during emotional change perception: EEG markers

Abstract: The ability to detect emotional changes is of primary importance for social living. Though emotional signals are often conveyed by multiple modalities, how emotional changes in vocal and facial modalities integrate into a unified percept has yet to be directly investigated. To address this issue, we asked participants to detect emotional changes delivered by facial, vocal and facial-vocal expressions while behavioral responses and electroencephalogram were recorded. Behavioral results showed that bimodal emoti…

Cited by 39 publications (28 citation statements). References 50 publications (118 reference statements).
“…In a study on bimodal emotion integration, P300 amplitudes were larger for audiovisual emotion stimuli than for visual emotion stimuli; the authors suggested that the bimodal stimuli led to a “dual novelty” in the cognitive task comprising visual and auditory stimuli, which enabled subjects to actively process multisensory information (Chen, Han, et al.). Similar findings were also reported in studies on the sensitivity of the P300 to emotional face–voice stimuli (Campanella et al.), the integration of facial and vocal emotion perception (Chen, Pan, et al.), and emotion recognition tasks (Liu et al.). In addition, changes in age and sex features in voices have also been shown to increase subjects' attention to and perception of stimuli (Li et al.).…”
Section: Discussion (supporting; confidence: 88%)
“…In our study, the accuracy and RBR were significantly greater in the E‐AV spelling paradigm than in the E‐V spelling paradigm at one and two superpositions; thus, the comparison of accuracy and RBR between the spelling paradigms was made at one and two superpositions. From the Figure, we observed that the accuracy of the E‐AV was higher than that in 2012 (Jin et al.) and 2016 (Chen, Pan, et al.) at one and two superpositions, and the RBR of the E‐AV was higher than that in 2012, 2014 (Jin et al.), and 2016 at one and two superpositions. The accuracy of the spelling paradigm is affected by several factors, such as the arrangement of the characters, the visual angle (subjects spelled more accurately at a larger visual angle than at a smaller one [Li, Nam, Shadden, & Johnson]), the SOA (an increased SOA yields a larger P300 amplitude, improving classification accuracy [Lu, Speier, Hu, & Pouratian]), and other factors.…”
Section: Discussion (mentioning; confidence: 76%)
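
The accuracy and RBR comparison in this statement follows the standard P300-speller practice of converting selection accuracy into an information transfer rate. As an illustration only (the quoted paper's exact RBR definition is not given here; this sketch assumes RBR means a raw bit rate computed with the common Wolpaw formula, and the matrix size, accuracy, and timing values below are hypothetical), the bits per selection and bits per minute might be computed as:

```python
import math

def wolpaw_bits_per_selection(n_choices: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per selection.

    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
    assuming errors are uniformly distributed over the N - 1 wrong choices.
    """
    p, n = accuracy, n_choices
    if p <= 0.0 or n < 2:
        return 0.0
    bits = math.log2(n)
    if p < 1.0:  # the error terms vanish at p == 1 and would produce log2(0)
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

def raw_bit_rate(n_choices: int, accuracy: float, seconds_per_selection: float) -> float:
    """Bits per minute, ignoring inter-trial breaks (one reading of 'raw' bit rate)."""
    return wolpaw_bits_per_selection(n_choices, accuracy) * 60.0 / seconds_per_selection

# Hypothetical numbers for a 6 x 6 speller matrix (36 characters):
# 85% accuracy and 10 s per selection at two stimulus superpositions.
print(f"{raw_bit_rate(36, 0.85, 10.0):.2f} bits/min")
```

Under this reading, a paradigm that raises accuracy at the same number of superpositions (i.e., the same selection time) necessarily raises the bit rate as well, which is the comparison the quoted study draws between the E‐AV and E‐V paradigms.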
“…Subsequent research by the same group showed that supra-additive increases in the STS occurred in both congruent and incongruent conditions (albeit later in the incongruent condition), suggesting automatic integration of emotional facial and vocal expressions (Hagan et al., 2013). Consistent with these findings, other studies have observed theta synchronization during the integration of facial and prosodic change (Chen et al., 2015). Together, these findings suggest that oscillatory activity in the alpha and theta frequency bands drives the integration of facial and vocal expressions.…”
Section: Integration of Facial, Body, and Vocal Expressions of Emotion (supporting; confidence: 84%)
“…In one study, participants were presented with simultaneous vocal and facial expressions and asked to detect a change of emotion, from neutral to anger or happiness, conveyed in the voice or in the face [20]. The P3 associated with detecting the emotional categorical change in both voice and face was larger than the sum of the changes in each single channel (see also [21]). The N1 associated with detecting the early acoustic change depended on whether attention was guided to the voice or the face: attention to the voice yielded an N1 in the bimodal-change condition that was larger than the sum of the two single-modal-change conditions.…”
Section: Modulation of Brain Responses Toward Vocal Expression by Oth… (mentioning; confidence: 99%)
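
The supra-additivity criterion described in this statement (a bimodal response exceeding the sum of the unimodal responses) can be made concrete with a short sketch. This is an illustration only: the per-participant amplitudes are synthetic, the paired test is a generic choice, and nothing here reproduces the cited paper's actual analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic mean P3 amplitudes (microvolts) per participant, one value per
# condition: face-only change (V), voice-only change (A), bimodal change (AV).
# These numbers are illustrative, not data from the cited studies.
n_subjects = 20
v = rng.normal(3.0, 1.0, n_subjects)
a = rng.normal(2.5, 1.0, n_subjects)
av = rng.normal(6.5, 1.2, n_subjects)

# Supra-additivity test: is AV reliably larger than A + V?
# A positive index means the bimodal response exceeds the unimodal sum.
diff = av - (a + v)
t, p = stats.ttest_1samp(diff, 0.0, alternative="greater")
print(f"mean supra-additivity index = {diff.mean():.2f} uV, t = {t:.2f}, p = {p:.4f}")
```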