2019 · Preprint
DOI: 10.1101/626374

Levels of Representation in a Deep Learning Model of Categorization

Abstract: Deep convolutional neural networks (DCNNs) rival humans in object recognition. The layers (or levels of representation) in DCNNs have been successfully aligned with processing stages along the ventral stream for visual processing. Here, we propose a model of concept learning that uses visual representations from these networks to build memory representations of novel categories, which may rely on the medial temporal lobe (MTL) and medial prefrontal cortex (mPFC). Our approach opens up two possibilities: a) for…
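The abstract describes building memory representations of novel categories from a network's visual features. A minimal sketch of that idea, using random NumPy vectors as hypothetical stand-ins for DCNN layer activations (the actual model's feature extraction and memory mechanism are not reproduced here), is a prototype built as the mean of exemplar activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for DCNN layer activations of category exemplars;
# in the paper's setup these would come from a pretrained network's layers.
n_exemplars, n_features = 20, 512
cat_a = rng.normal(loc=1.0, scale=1.0, size=(n_exemplars, n_features))
cat_b = rng.normal(loc=-1.0, scale=1.0, size=(n_exemplars, n_features))

# A prototype memory representation: the mean of the exemplar activations.
proto_a = cat_a.mean(axis=0)
proto_b = cat_b.mean(axis=0)

def classify(x, prototypes):
    """Assign x to the nearest prototype (Euclidean distance)."""
    dists = [np.linalg.norm(x - p) for p in prototypes]
    return int(np.argmin(dists))

novel = rng.normal(loc=1.0, scale=1.0, size=n_features)  # drawn near category A
print(classify(novel, [proto_a, proto_b]))  # prints 0 (nearer prototype A)
```

A new stimulus is then categorized by comparing its feature vector against the stored prototypes, rather than against every stored exemplar.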

Cited by 13 publications (14 citation statements)
References 66 publications
“…The classes have been intentionally selected to overlap with 200 classes from ImageNet. These images can reduce DCNN performance drastically by exploiting the vulnerabilities of these networks such as their colour and texture biases (Guest and Love 2019; Hu et al 2018). Although adversarial attacks have been heavily studied (Goodfellow et al 2015; Nguyen et al 2015; Song et al 2018), these works use synthetic or unrealistic images that are carefully designed to defeat advanced DCNNs.…”
Section: Experiments 3: Natural Adversarial Images (mentioning)
confidence: 99%
“…In SHJ Set 2 (from Crump et al (2013)), the images are varying geometric shapes on a black background with a green border, with the three binary features being size (large or small), shape (square or triangle), and color (black or white). Finally, in SHJ Set 3 (from Guest and Love (2019)), the three binary features are the same as SHJ Set 2 but with different values and visual appearance, with size (large or small), shape (circle or square), and color (red or blue). Additionally, for each dataset, there are six different mappings (permutations) between the abstract and image-based features.…”
Section: Methods (mentioning)
confidence: 99%
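The SHJ stimulus sets quoted above are built from three binary features, giving eight stimuli, with six possible mappings between abstract and image-based features. A small sketch of that combinatorial structure (feature value labels follow the excerpt's description of SHJ Set 3; the rendering itself is illustrative):

```python
from itertools import product, permutations

# The three binary features of SHJ Set 3, per the excerpt above.
features = {
    "size":  ("large", "small"),
    "shape": ("circle", "square"),
    "color": ("red", "blue"),
}

# Eight stimuli: every combination of three binary feature values.
stimuli = list(product((0, 1), repeat=3))

# Six mappings (permutations) between abstract dimensions and image features.
mappings = list(permutations(features))

def describe(stim, mapping):
    """Render an abstract (bit, bit, bit) stimulus under one mapping."""
    return {feat: features[feat][bit] for feat, bit in zip(mapping, stim)}

print(len(stimuli), len(mappings))  # prints 8 6
print(describe((0, 1, 0), mappings[0]))
```

Counterbalancing over the six permutations ensures that model (or human) performance reflects the abstract category structure rather than any one assignment of physical features to logical dimensions.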
“…For instance, CNNs pre-trained for image classification have shown success in predicting category typicality ratings (Lake, Zaremba, Fergus, & Gureckis, 2015) and similarity ratings from natural images (Peterson, Abbott, & Griffiths, 2018). More recently, researchers have begun to combine CNNs with classic prototype and exemplar models of categorization (Guest & Love, 2019; Singh, Peterson, Battleday, & Griffiths, 2020; Nosofsky, Meagher, & Kumar, 2020), usually with the aim of predicting human categorization decisions for images of common categories such as animals and vehicles.…”
Section: Introduction (mentioning)
confidence: 99%
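The excerpt above mentions combining CNN features with classic exemplar models. A minimal sketch of the exemplar side, in the style of the generalized context model (GCM), again using random vectors as hypothetical stand-ins for CNN activations (category names and the sensitivity parameter `c` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for CNN activations of stored exemplars.
n_per_cat, dim = 10, 64
exemplars = {
    "animal":  rng.normal(1.0, 1.0, size=(n_per_cat, dim)),
    "vehicle": rng.normal(-1.0, 1.0, size=(n_per_cat, dim)),
}

def gcm_probabilities(x, exemplars, c=0.5):
    """GCM-style choice probabilities: similarity to each stored exemplar
    decays exponentially with distance, and the summed similarity per
    category is normalized via the Luce choice rule."""
    sims = {}
    for cat, ex in exemplars.items():
        d = np.linalg.norm(ex - x, axis=1)
        sims[cat] = np.exp(-c * d).sum()
    total = sum(sims.values())
    return {cat: s / total for cat, s in sims.items()}

probe = rng.normal(1.0, 1.0, size=dim)  # drawn near the "animal" cluster
print(gcm_probabilities(probe, exemplars))
```

Unlike the prototype approach, every stored exemplar contributes to the decision, which is what lets exemplar models capture sensitivity to atypical category members.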
“…Previous researchers have used various mapping methods to gain insight into neural network activity based on the activation of network layers. For instance, previous work has used stimulus-decoding analyses and activation similarity to probe for features represented by networks and to gain insight into the processing stages corresponding to layers of the networks (e.g., Ettinger, Elgohary & Resnik, 2016; Qian, Qiu & Huang, 2016; Hupkes, Veldhoen & Zuidema, 2018; Guest & Love, 2019; Lakretz et al, 2019; Tenney, Das & Pavlick, 2019).…”
Section: Related Work (mentioning)
confidence: 99%
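One common form of the activation-similarity probing mentioned above is representational similarity analysis: compare the pairwise-dissimilarity structure of a layer's activations against that of a target feature code. A self-contained sketch with synthetic data (the "layers" and feature codes here are fabricated for illustration, not taken from any cited network):

```python
import numpy as np

rng = np.random.default_rng(2)

def rdm(acts):
    """Representational dissimilarity matrix: pairwise Euclidean distances."""
    diff = acts[:, None, :] - acts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rsa_score(layer_acts, target_feats):
    """Correlate the upper triangles of two RDMs -- a simple
    activation-similarity probe for whether a layer encodes a feature."""
    a, b = rdm(layer_acts), rdm(target_feats)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

n_stimuli = 16
target = rng.normal(size=(n_stimuli, 4))       # hypothetical feature code
early = rng.normal(size=(n_stimuli, 100))      # layer unrelated to the feature
late = np.hstack([target.repeat(10, axis=1),   # layer that embeds the feature
                  0.1 * rng.normal(size=(n_stimuli, 20))])

print(rsa_score(early, target), rsa_score(late, target))
```

A layer whose similarity structure tracks the target feature scores near 1, while an unrelated layer scores near 0, which is the kind of evidence used to align layers with processing stages.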