2014
DOI: 10.3389/fnbeh.2014.00422

Discrimination of fearful and angry emotional voices in sleeping human neonates: a study of the mismatch brain responses

Abstract: Appropriate processing of human voices with different threat-related emotions is of evolutionarily adaptive value for the survival of individuals. Nevertheless, it is still not clear whether the sensitivity to threat-related information is present at birth. Using an odd-ball paradigm, the current study investigated the neural correlates underlying automatic processing of emotional voices of fear and anger in sleeping neonates. Event-related potential data showed that the fronto-central scalp distribution of th…

Cited by 28 publications (28 citation statements)
References 83 publications
“…Los Angeles, CA), which consisted of 16 LED emitters (intensity = 5 mW/wavelength) and 16 detectors at two wavelengths (760 and 850 nm). Based on previous studies in infants (e.g., Benavides-Varela, Gómez, & Mehler, 2011;Cheng et al, 2012;Minagawa-Kawai et al, 2011;Saito et al, 2007;Sato et al, 2012;Taga & Asakawa, 2007;Zhang et al, 2014) and adults (e.g., Brück et al, 2011;Frühholz, Trost, & Kotz, 2016), we placed the optodes over temporal, frontal, and central regions of the brain, using a NIRS-EEG compatible cap of 32 cm diameter (EASYCAP, Herrsching, Germany) in accordance with the international 10/10 system. There were 48 useful channels (24 per hemisphere), where source and detector were at a mean distance of 2.5 cm ( Figure 2; see also Altvater-Mackensen & Grossmann, 2016;Bennett, Bolling, Anderson, Pelphrey, & Kaiser, 2014;Obrig et al, 2017;Quaresima, Bisconti, & Ferrari, 2012;Telkemeyer et al, 2009).…”
Section: NIRS Data Recording
confidence: 99%
“…Although studies exist to suggest that socially salient auditory information, including emotionally loaded human vocalizations, modulates infant neural responses, the findings have been mixed. The infant brain seems to differentiate between emotional prosody embedded in speech soon after birth (Cheng, Lee, Chen, Wang & Decety, 2012;Zhang et al, 2014), probably relying on automatic discrimination processes related to the activity of primary and non-primary auditory areas in the temporal cortex (Näätänen, Paavilainen, Rinne, & Alho, 2007). While there is limited evidence to make such claims, auditory processing of emotion prosody in infancy seems to resemble adult-like processing demonstrating sensitivity to emotional content both at early processing stages (Grossmann et al, 2013) and at later ones (Grossmann, Striano, & Friederici, 2005).…”
Section: Introduction
confidence: 99%
“…Crucially, all these nonverbal vocalizations were produced by infants. On the basis of previous studies on emotion perception from voice in both infants (Cheng et al, 2012;Grossman et al, 2005;Missana et al, 2017;Zhang et al, 2014) and adults (Jessen & Kotz, 2011;Liu et al, 2012;Paulmann et al, 2013;Pell et al, 2015;Schirmer et al, 2005), we examined differences between affective and neutral auditory stimuli at the level of the early ERP components, in particular those corresponding to the N100 and the P200. Given the sensitivity of the N100 and P200 amplitude to emotional information (e.g., Pell et al, 2015;Paulmann et al, 2013;Missana et al, 2017), we hypothesized that emotional nonverbal vocalizations would evoke larger N100 and P200 amplitudes relative to neutral vocalizations.…”
Section: Introduction
confidence: 99%
“…For example, they actively respond to auditory and visual stimuli while asleep (Cheour et al, 2002; Cruz, Crego, Ribeiro, Goncalves, & Sampaio, 2015; deRegnier, Nelson, Thomas, Wewerka, & Georgieff, 2000; Kotilahti et al, 2010; Sambeth, Ruohio, Alku, Fellman, & Huotilainen, 2008). Event related potential (ERP) studies show a differential response to mother’s voice versus a stranger’s voice (deRegnier et al, 2000) and a mis-match response to fearful versus angry voices (Zhang et al, 2014). Near-infrared spectroscopy (NIRS) has been used to demonstrate hemodynamic responses to speech and music in sleeping newborns (Kotilahti et al, 2010).…”
Section: Introduction
confidence: 99%
“…Rodents in a classical conditioning paradigm during REM sleep were able to learn a conditioned response (Hennevin, Hars, Maho, & Bloch, 1995), though no learning appeared to occur during non-REM sleep. Using a mismatch negativity paradigm, differential reactivity to fearful versus angry voices was observed during active sleep in newborns (Zhang et al, 2014). Recently, Barnes and Wilson (2014) showed that olfactory memories could be enhanced or disrupted in rodents and that the effect was confined to slow wave sleep.…”
Section: Introduction
confidence: 99%