2020
DOI: 10.1101/2020.07.17.206896
Preprint

Untangling the animacy organization of occipitotemporal cortex

Abstract: Some of the most impressive functional specialization in the human brain is found in occipitotemporal cortex (OTC), where several areas exhibit selectivity for a small number of visual categories, such as faces and bodies, and spatially cluster based on stimulus animacy. Previous studies suggest this animacy organization reflects the representation of an intuitive taxonomic hierarchy, distinct from the presence of face- and body-selective areas in OTC. Using human fMRI, we investigated the independent contribu…


Cited by 4 publications (3 citation statements)
References 86 publications (48 reference statements)
“…As an example, view-invariant features represented in face- [54, 55] and hand-selective regions [56, 57] reflect domain-specific computations: the former support identity recognition [58], the latter support action understanding [56]. In agreement, our results show that, in addition to a large division between animal and scene representations, within each domain the representational content reflects the type of computations these networks support: animacy features in animal-selective areas [4, 5, 59] and layout navigational properties in scene-selective areas [7, 60]. We can show this representational diversity because our study included separate behaviorally relevant dimensions for objects (i.e., animacy continuum) and background scenes (i.e., navigational properties), which was typically not done in previous studies.…”
Section: Discussion (supporting)
confidence: 83%
“…For instance, in scene-selective areas, the degree of navigational layout characterizes the representational content well, which is relevant for navigation [7, 8]. In a similar fashion, in animal-selective areas, the degree of animacy [47, 48] and animal-specific features [4, 5] might be relevant to support social-related computations. In our results, the DCNN's mid-layers show a domain division for animals and scenes (Fig 3A), but does this division embed rich domain-specific object spaces like those observed in the human visual cortex?…”
Section: Results (mentioning)
confidence: 99%
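The quoted passage asks whether a DCNN layer whose mid-level representation separates animals from scenes also embeds richer domain-specific structure. A minimal sketch of the kind of RDM comparison this implies is given below; the feature matrix, image labels, layer choice, and distance metrics are all assumptions for illustration, not the cited authors' pipeline.

```python
# Illustrative sketch (hypothetical arrays and labels): correlate a DCNN
# layer's RDM with a binary animal-vs-scene "domain" model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def layer_rdm(features):
    """features: (n_images, n_units) activations from one DCNN layer."""
    return pdist(features, metric='correlation')   # condensed 1 - r distances

def domain_model_rdm(is_animal):
    """is_animal: boolean per image; model distance 0 within domain, 1 across."""
    labels = np.asarray(is_animal, dtype=int)
    return pdist(labels[:, None], metric='cityblock')

# Example with random stand-in activations (shapes are assumptions):
rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 512))        # 40 images, 512 units
animal = np.arange(40) < 20               # first 20 images labeled "animal"
rho, _ = spearmanr(layer_rdm(feats), domain_model_rdm(animal))
print(f"layer-to-domain-model RDM correlation (Spearman rho): {rho:.2f}")
```

A high correlation with the binary model would indicate a domain division, but, as the quoted passage notes, it would not by itself show that the layer also captures finer within-domain structure such as an animacy continuum.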
“…Neural RDMs for the different ROIs were constructed for each subject using the (non-crossvalidated) Mahalanobis distance as the dissimilarity metric, defined as the pairwise distance along the discriminant between conditions for the beta-weight patterns in an ROI (Ritchie and Op de Beeck, 2019; Walther et al., 2016). To assess the between-subject reliability of these RDMs, the RDM of one subject was left out, the RDMs of the remaining subjects were averaged, and the average was Pearson-correlated with the left-out subject's RDM.…”
Section: Representational Similarity Analysis (mentioning)
confidence: 99%
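The methods excerpt above describes two steps: building per-subject ROI RDMs from Mahalanobis distances between condition beta patterns, and assessing between-subject reliability with a leave-one-subject-out Pearson correlation. Below is a minimal sketch of those two steps, assuming plain NumPy arrays for the beta patterns and a voxel noise covariance; variable names and shapes are illustrative, not taken from the cited authors' code.

```python
# Sketch of (non-crossvalidated) Mahalanobis RDMs and leave-one-subject-out
# reliability. Inputs are assumed: betas (n_conditions x n_voxels) per ROI
# and a voxel-by-voxel noise covariance estimate.
import numpy as np
from scipy.stats import pearsonr

def mahalanobis_rdm(betas, noise_cov):
    """Pairwise Mahalanobis distances between condition beta patterns."""
    n_cond = betas.shape[0]
    prec = np.linalg.pinv(noise_cov)          # noise precision (pseudo-inverse)
    rdm = np.zeros((n_cond, n_cond))
    for i in range(n_cond):
        for j in range(i + 1, n_cond):
            d = betas[i] - betas[j]
            rdm[i, j] = rdm[j, i] = np.sqrt(d @ prec @ d)
    return rdm

def leave_one_out_reliability(rdms):
    """rdms: list of per-subject RDMs. For each subject, correlate the upper
    triangle of their RDM with the average RDM of all remaining subjects."""
    idx = np.triu_indices(rdms[0].shape[0], k=1)
    rs = []
    for s, rdm in enumerate(rdms):
        others = np.mean([r for t, r in enumerate(rdms) if t != s], axis=0)
        rs.append(pearsonr(rdm[idx], others[idx])[0])
    return np.array(rs)
```

In practice the noise covariance would come from the GLM residuals (often shrinkage-regularized), and the resulting per-subject reliability values give an estimate of the noise ceiling for model comparisons against the neural RDMs.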