2020
DOI: 10.1016/j.visres.2019.12.001
Can machine learning account for human visual object shape similarity judgments?

Cited by 13 publications (8 citation statements) · References 20 publications
“…However, similarity judgments break down for unfamiliar objects. For example, German and Jacobs (2020) measured human similarity judgments between pairs of novel part-based naturalistic objects (fribbles) presented across multiple viewpoints. These judgments were then compared with the similarities observed in DNNs in response to the same stimuli.…”
Section: The Empirical Problem With Claiming DNNs and Human Vision Ar... (mentioning; confidence: 99%)
“…In spite of these successes when tested on well-known psychological phenomena, DNNs often fail on the most basic perceptual properties exhibited by humans. For example, DNNs do not possess a human-like shape bias (Geirhos et al., 2018; Malhotra and Bowers, 2019), they appear to discriminate categories based on local instead of global features (Baker et al., 2018b; Malhotra et al., 2021), are much more susceptible to small amounts of image degradation (Geirhos et al., 2018), do not account for humans' similarity judgments of 3D shapes (German and Jacobs, 2020), and fail to support basic visual reasoning such as classifying images as the same or different (Puebla and Bowers, 2021). In addition, DNNs often act in surprising, non-human-like ways, such as being fooled by adversarial images (Szegedy et al., 2013; Dujmović et al., 2020) and making bizarre classification errors on familiar objects in unusual poses (Kauderer-Abrams, 2017; Gong et al., 2014; Chen et al., 2017).…”
Section: Neural Network As a Model Of The Human Visual System (mentioning; confidence: 99%)
“…On one hand, CNNs' internal activations are predictive of brain recordings in response to images taken from classic benchmark datasets in the mid and high levels of the ventral visual pathway (Yamins et al., 2013, 2014; Cichy et al., 2016). On the other hand, CNNs show some behavioural discrepancies that undermine their plausibility as a model of the visual system: they are susceptible to adversarial attacks (Szegedy et al., 2013; Dujmović et al., 2020); they are highly susceptible to small amounts of image degradation (Geirhos et al., 2018); they often classify images based on textures instead of shape (Geirhos et al., 2018) and on local instead of global features (Baker et al., 2018); and they do not account for humans' similarity judgments of 3D shapes (German and Jacobs, 2020). Within the framework of this debate, we aim to answer the question of whether CNNs can learn to support online invariance for a wide variety of transformations commonly experienced in the human visual environment.…”
Section: Introduction (mentioning; confidence: 99%)