2017
DOI: 10.1101/228932
Preprint

The ventral visual pathway represents animal appearance over animacy, unlike human behavior and deep neural networks

Abstract: Recent studies showed general agreement between object representations in the ventral visual pathway and representational similarities in behavior and deep neural networks. In this fMRI study we challenge this state-of-the-art by dissociating object appearance (what does the object look like?) from object category (which object category is it?). The stimulus set includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the anim…

Cited by 31 publications (57 citation statements)
References 63 publications
“…Essentially, this type of model learns to extract visual features that are consistently denoted by a given word; for example, on the basis of a large number of examples, the model will learn the visual representation of a table by individuating the typical visual patterns found in the images associated with the label "table". The cognitive plausibility of visual representations obtained from this type of model has been validated in previous studies, which showed that simulations based on estimates from CNNs are in line with the explicit intuitions of human participants (Bracci, Kalfas, & de Beeck, 2017; Lazaridou et al., 2017).…”
Section: Image-based Estimates of Visual Similarity (supporting)
confidence: 57%
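To make the idea in this statement concrete, here is a minimal sketch of extracting a feature vector for an image from a label-trained CNN and using it as an image-based estimate of visual similarity. The architecture, preprocessing, and file names below are illustrative assumptions, not the pipeline used in the cited studies.

```python
# Minimal sketch: image-based visual similarity from a pretrained CNN.
# Assumes torchvision >= 0.13 is available; the cited studies used their
# own architectures and pipelines, so treat this only as an illustration.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a label-trained CNN and drop the final classification layer so
# the model outputs a feature vector rather than class probabilities.
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()
cnn.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return the CNN feature vector for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0)

# Visual similarity between two images = cosine similarity of features.
# "cow.jpg" and "mug.jpg" are placeholder file names.
sim = torch.nn.functional.cosine_similarity(
    embed("cow.jpg"), embed("mug.jpg"), dim=0)
print(f"visual similarity: {sim.item():.3f}")
```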
“…In addition to similarity, other psychological concepts that have been tested in CNNs include typicality [31], Gestalt principles [32], and animacy [33]. In [127], a battery of tests inspired by findings in visual psychophysics was applied to CNNs, which were found to be similar to biological vision according to roughly half of them.…”
Section: Comparison at the Behavioral Level (mentioning)
confidence: 99%
“…Such models are originally trained to predict an image label from a vector representation encoding the pixel-based RGB values of the respective image (see the upper-right part of Figure 2), and have reached impressive levels of performance in this task (Chatfield, Simonyan, Vedaldi, & Zisserman, 2014; Krizhevsky et al., 2012). Furthermore, representations obtained from such models have been validated as measures of visual similarity (Petilli, Günther, Vergallito, Ciaparelli, & Marelli, 2019), and it has been shown that they closely correspond to human intuitions (Bracci, Ritchie, Kalfas, & de Beeck, 2019; Lazaridou, Marelli, & Baroni, 2017; Phillips et al., 2018; Zhang, Isola, Efros, Shechtman, & Wang, 2018).…”
Section: Vision-based Representations (mentioning)
confidence: 99%
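The correspondence with human intuitions mentioned in this statement is typically quantified by correlating model-based and behavioral (dis)similarities, as in representational similarity analysis. Below is a minimal, self-contained sketch of that comparison using placeholder data; the stimulus count, feature dimensionality, and ratings are assumptions, not the cited studies' materials.

```python
# Minimal sketch of validating CNN features against human intuitions:
# correlate a model-based dissimilarity structure with behavioral
# dissimilarity ratings (a simple representational-similarity analysis).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder data: feature vectors for 20 stimuli (e.g., CNN embeddings)
# and one human dissimilarity judgment per stimulus pair.
features = rng.normal(size=(20, 512))
human_dissim = rng.uniform(size=20 * 19 // 2)

# Model representational dissimilarity: pairwise correlation distance
# between feature vectors, as a condensed (upper-triangle) vector.
model_dissim = pdist(features, metric="correlation")

# Agreement between model and behavior: rank correlation of the two
# dissimilarity vectors.
rho, p = spearmanr(model_dissim, human_dissim)
print(f"model-behavior correlation: rho={rho:.3f}, p={p:.3f}")
```

With real data, the placeholder arrays would be replaced by CNN embeddings of the experimental stimuli and averaged pairwise similarity ratings from participants; a reliably positive rank correlation is what the cited validation studies report.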