2022
DOI: 10.1038/s42003-022-03711-3

Real-world size of objects serves as an axis of object space

Abstract: Our mind can represent various objects from the physical world in an abstract and complex high-dimensional object space, with axes encoding critical features to quickly and accurately recognize objects. Among the object features identified in previous neurophysiological and fMRI studies that may serve as the axes, objects’ real-world size is of particular interest because it provides not only visual information for broad conceptual distinctions between objects but also ecological information for objects’ affordance. H…

Cited by 11 publications (23 citation statements)
References: 54 publications
“…In contrast to prior work, we observed neural selectivity for real-world size for both animate and inanimate object categories (Huang et al., 2022; Julian et al., 2017; Konkle and Caramazza, 2013; Konkle and Oliva, 2012). Moreover, when comparing the qualitative neural patterns associated with animate and inanimate objects, we see a high degree of agreement in real-world size selectivity, but with the magnitude of the effect for animate objects being weaker (which may be one reason why prior studies did not detect the effect).…”
Section: Discussion (contrasting)
confidence: 99%
“…First, in contrast to prior studies of real-world size, our study employs a diverse dataset of natural images, where each object is presented in contextually-relevant and diverse backgrounds. Our present results are largely consistent with past results on the coarse-scale organization of size-selective neural populations that relied on single, isolated objects as stimuli (Huang et al., 2022; Julian et al., 2017; Konkle and Caramazza, 2013; Konkle and Oliva, 2012). Second, our more ecologically-grounded approach reveals that size selectivity is present for both animate and inanimate objects, which contrasts with prior studies that suggested that size selectivity was exclusive to inanimate objects (Konkle and Caramazza, 2013).…”
Section: Experiments 3: Low-level Features and Size Selectivity (supporting)
confidence: 92%
“…To such a (Gibsonian) animal, the prerequisite of being considered an "object" would be to afford interaction, and an object beyond the manageable size range would not have common affordance. This speculation is in line with previous fMRI studies where large objects activate the medial part of the ventral temporal cortex (Huang et al., 2022; Magri et al., 2021), overlapping with the parahippocampal gyrus involved in representing scenes (Park et al., 2011; Troiani et al., 2014), and smaller objects activate the lateral part, such as the pFs, where the congruency effect of affordance was identified. In our fMRI experiment, we found that the congruency effect of affordance is only evident for objects within the range of the body size, but not for objects beyond, suggesting that affordance is typically represented only for objects within the body size.…”
Section: Discussion (supporting)
confidence: 91%
“…The similarity matrix based on the real-world size of the objects was constructed by the consistency between pairs of size ranks corresponding to each object, which was defined as (Huang et al., 2022): where i and j are indicators to denote the object’s size rank.…”
Section: Methods (mentioning)
confidence: 99%
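The Methods quote above describes building a similarity matrix from pairwise consistency of size ranks, but the equation itself is not reproduced in this excerpt. As a purely illustrative sketch, one could define similarity from normalized pairwise rank differences; the function name and the specific formula below are assumptions, not the paper's definition:

```python
import numpy as np

# Hypothetical sketch: build a model representational similarity matrix (RSM)
# from real-world size ranks. The exact formula from Huang et al. (2022) is not
# shown in the excerpt; similarity here is assumed to be 1 minus the normalized
# absolute difference between size ranks i and j.
def size_rank_rsm(size_ranks):
    ranks = np.asarray(size_ranks, dtype=float)
    diff = np.abs(ranks[:, None] - ranks[None, :])   # pairwise rank distance |i - j|
    return 1.0 - diff / diff.max()                    # similarity scaled to [0, 1]

# Example: four objects ranked 1 (smallest) to 4 (largest)
rsm = size_rank_rsm([1, 2, 3, 4])
```

A matrix like this would then be compared against neural similarity matrices in a standard representational similarity analysis.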
“…In so doing, they could predict the tuning of previously uncharted regions of the primate visual cortex based on the major dimensions of the deep neural network feature space, and they linked animacy and object protrusion distinctions to the major principal components of this DNN space. Relatedly, Huang et al. (2022) have found that information about the real-world size of objects is encoded along the second principal component of the late stages of deep neural networks. Further, Vinken et al. (2022) recently demonstrated that face-selective neurons in IT could be accounted for by the feature tuning learned in these same object-trained deep neural networks (also see Prince & Konkle, 2020; Murty et al., 2021; Khosla & Wehbe, 2022).…”
mentioning
confidence: 99%
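The statement above refers to reading object information off the principal components of a DNN's late-layer feature space. A minimal sketch of that kind of analysis, using synthetic random features in place of real network activations (the feature matrix and its dimensions are assumptions for illustration only):

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): compute principal components of a
# feature matrix, e.g. late-layer DNN activations for a set of objects, then
# read off each object's coordinate on a given component. Synthetic random
# features stand in for real network activations here.
rng = np.random.default_rng(0)
features = rng.standard_normal((50, 128))     # 50 objects x 128 feature dims

centered = features - features.mean(axis=0)   # center before PCA
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc2_scores = centered @ vt[1]                 # projection onto the 2nd PC

# In the cited analyses, per-object scores along a component like this would be
# compared against objects' real-world size ranks (e.g., via rank correlation).
```

By the SVD convention, components are ordered by explained variance, so the second row of `vt` is the second principal direction.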