2017
DOI: 10.1523/eneuro.0113-17.2017
Shape Selectivity of Middle Superior Temporal Sulcus Body Patch Neurons

Abstract: Functional MRI studies in primates have demonstrated cortical regions that are strongly activated by visual images of bodies. The presence of such body patches in macaques allows characterization of the stimulus selectivity of their single neurons. Middle superior temporal sulcus body (MSB) patch neurons showed similar stimulus selectivity for natural, shaded, and textured images compared with their silhouettes, suggesting that shape is an important determinant of MSB responses. Here, we examined and modeled t…

Cited by 27 publications (30 citation statements) · References 37 publications
“…Recently, a family of computational models has emerged in the form of convolutional deep neural networks (DNNs) that make it possible to simulate this hierarchical information processing. When trained on object recognition, DNNs show interesting commonalities with the primate ventral stream, with a progression of representations that is surprisingly similar to what is seen in monkeys and humans for brief stimulus presentations (Cadieu et al, 2014; Yamins et al, 2014; Güçlü and van Gerven, 2015; Kalfas et al, 2017, 2018; Pospisil et al, 2018; Bashivan et al, 2019), thereby capturing important aspects of object recognition and perceived shape similarity (Yamins et al, 2014; Kubilius et al, 2016; Kalfas et al, 2018). The architecture of these computational models is composed of a series of convolutional layers that perform local filtering operations, followed by fully connected layers, gradually transforming pixel-level inputs into a high-level representational space in which object categories are linearly separable.…”
Section: Introduction (mentioning)
confidence: 77%
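The architecture described in this citation context, a convolutional stage of local filtering followed by fully connected layers that yield a linearly separable high-level representation, can be illustrated with a minimal sketch. The snippet below is an illustrative toy network in PyTorch, not the specific DNNs used in the cited studies; the layer counts, channel widths, input size, and class count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Toy convolutional network (hypothetical sizes, for illustration only)
model = nn.Sequential(
    # Convolutional stage: local filtering operations over the image
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 32x32 -> 16x16
    # Fully connected stage: transform local features into a
    # high-level representational space
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128),
    nn.ReLU(),
    nn.Linear(128, 10),           # linear readout over 10 assumed categories
)

x = torch.randn(1, 3, 64, 64)     # one dummy 64x64 RGB image
logits = model(x)
print(logits.shape)               # torch.Size([1, 10])
```

The final linear layer plays the role of the readout from the high-level space in which, after training on object recognition, categories become linearly separable.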
“…The animacy division is considered one of the main organizational principles in visual cortex (e.g., Grill-Spector and Weiner, 2014), but the information content underlying this division is highly debated (Baldassi et al, 2013; Grill-Spector and Weiner, 2014; Nasr et al, 2014; Bracci and Op de Beeck, 2016; Bracci et al, 2017b; Kalfas et al, 2017). Animacy and other category distinctions are often correlated with a range of low- and higher-level visual features, such as the spatial frequency spectrum (Nasr et al, 2014; Rice et al, 2014) and shape (Cohen et al, 2014; Jozwik et al, 2016), but the animacy structure remains even when dissociated from such features (Bracci and Op de Beeck, 2016).…”
Section: Discussion (mentioning)
confidence: 99%
“…A fundamental goal in visual neuroscience is to reach a deep understanding of the neural code underlying object representations: how does the brain represent the objects we perceive around us? Over the years, research has characterized object representations in the primate brain in terms of their content for a wide range of visual and semantic object properties such as shape, size, or animacy (Konkle and Oliva, 2012; Nasr et al, 2014; Bracci and Op de Beeck, 2016; Kalfas et al, 2017). More recently, our understanding of these multidimensional object representations has been lifted to a higher level by the advent of so-called deep convolutional neural networks (DNNs), which not only reach human behavioral performance in image categorization (Russakovsky et al, 2014; He et al, 2015; Kheradpisheh et al, 2016a) but also appear to develop representations that share many properties of primate object representations (Cadieu et al, 2014; Güçlü and van Gerven, 2014; Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al, 2014; Güçlü and van Gerven, 2015; Kubilius et al, 2016).…”
Section: Introduction (mentioning)
confidence: 99%
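Claims that DNN representations "share many properties" with primate object representations are commonly tested with representational similarity analysis (RSA), comparing pairwise dissimilarity structures across systems. The sketch below shows the general idea with random placeholder data; RSA is a standard technique in this literature, but the stimulus, unit, and neuron counts here are hypothetical and do not come from any of the cited papers.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical data: responses of 100 model units and 80 neurons
# to the same 40 stimuli (rows = stimuli, columns = units/neurons)
model_features = rng.standard_normal((40, 100))
neural_responses = rng.standard_normal((40, 80))

# Representational dissimilarity matrix (RDM): correlation distance
# between the response patterns evoked by each pair of stimuli
rdm_model = pdist(model_features, metric="correlation")
rdm_neural = pdist(neural_responses, metric="correlation")

# Agreement between the two representational geometries
rho, p = spearmanr(rdm_model, rdm_neural)
print(f"RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

With real data, a positive rank correlation between the model and neural RDMs indicates that the two systems treat the same stimulus pairs as similar or dissimilar; with the random data above, rho is expected to be near zero.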