2021
DOI: 10.1016/j.cub.2021.01.102

Audiovisual integration in macaque face patch neurons

Abstract Highlights:
- Audiovisual integration was examined among neurons in macaque AF and AM face patches
- Most neurons in AF were modulated by the acoustic component of macaque vocalizations
- Acoustic modulation in AF was contingent on visual facial structure
- Very few neurons in AM exhibited auditory responses or modulation

Cited by 39 publications (39 citation statements)
References 62 publications
“…Scanning optogenetic inactivation indicated that MOs was the only region of dorsal cortex required to respond to both modalities, and electrode recordings indicated that MOs was the only region of frontal cortex to encode information about both modalities as well as choices. This might appear to contradict previous work implicating parietal cortex in multisensory integration 3,5,6,14,24–32, or showing multisensory activity in primary sensory cortices 36–45. However, our finding agrees with evidence that parietal cortex can reflect multisensory activity without being causally involved in a task 24,33,46,47.…”
Section: Discussion (contrasting)
confidence: 76%
“…To conclude a brain region is involved in multisensory integration, it must not only be shown to contain neurons encoding information from both sensory modalities, but also to have a causal role in behavioural responses to both modalities, alone or in combination. In rodents and other mammals, including humans, several brain regions appear to encode multiple modalities, including superior colliculus 15–19, thalamus 20–23, parietal cortex 3,5,6,14,24–32, frontal cortex 33–35, and even primary sensory cortices 36–45. However, the causal role of these regions in multisensory decisions remains unclear.…”
Section: Introduction (mentioning)
confidence: 99%
“…Furthermore, videos can be paired with auditory stimuli to provide multisensory information, better mimicking real-life social interaction. A recent study presenting videos of monkeys together with vocalizations revealed that the auditory component modulated face-selective single-neuron activity in a specific face patch (Khandhadia et al., 2021), supporting the idea that social stimuli with unisensory versus multisensory information engage the brain in different ways.…”
Section: Level 1: Static Social Stimuli (mentioning)
confidence: 81%
“…To probe this hypothesis, we apply the less stringent multisensory integration criteria used in fMRI studies, namely we test for audio-visual responses statistically higher (or lower) than each of the uni-sensory conditions (Beauchamp, 2005; Gentile et al., 2010; Pollick et al., 2011; Tyll et al., 2013; Werner & Noppeney, 2010). Although face-voice integration has been described in the auditory cortex (CL, CM, in awake and anesthetized monkeys; A1 only in awake monkeys) and the STS (Ghazanfar et al., 2008; Perrodin et al., 2015), and to a lesser extent in specific face patches (Khandhadia et al., 2021), here, enhancement of the audio-visual response can only be seen in the blocked conditions involving visual scenes. The parsimonious interpretation of these observations is that face-vocalization binding was easier than scene-vocalization binding, thereby resulting in enhanced integrative processes, specifically in this latter condition, in agreement with the fact that neuronal multisensory integration is more pronounced for low-saliency stimuli.…”
Section: Audio-visual Association Based On Meaning and Multisensory Integration (mentioning)
confidence: 99%
“…Face, voice, and social scene processing in monkeys have been individually explored, to some extent, from the behavioural (Gothard et al., 2004, 2009; Rendall et al., 1996; Sliwa et al., 2011) and the neuronal point of view (Aparicio et al., 2016; Arcaro et al., 2017; Cohen et al., 2007; Eifuku, 2014; Gil-da-Costa et al., 2004, 2006; Hesse & Tsao, 2020; Issa & DiCarlo, 2012; Joly, Pallier, et al., 2012; Moeller et al., 2008; Ortiz-Rios et al., 2015; Petkov et al., 2008; Pinsk et al., 2005, 2009; Poremba et al., 2003, 2004; Romanski et al., 2005; Russ et al., 2008; Schwiedrzik et al., 2015; Sliwa & Freiwald, 2017; Tsao et al., 2003). Audiovisual integration during naturalistic social stimuli has recently been shown in specific regions of the monkey face-patch system (Khandhadia et al., 2021), the voice-patch system (Ghazanfar, 2009; Ghazanfar et al., 2005; Perrodin et al., 2014, 2015), as well as in the prefrontal voice area (Romanski, 2012). However, beyond combining sensory information, social perception also involves integrating contextual, behavioural and emotional information (Freiwald, 2020; Ghazanfar & Santos, 2004).…”
Section: Introduction (mentioning)
confidence: 99%