2019
DOI: 10.1371/journal.pcbi.1007001
Beyond core object recognition: Recurrent processes account for object recognition under occlusion

Abstract: Core object recognition, the ability to rapidly recognize objects despite variations in their appearance, is largely solved through the feedforward processing of visual information. Deep neural networks have been shown to achieve human-level performance in these tasks and to explain primate brain representations. On the other hand, object recognition under more challenging conditions (i.e. beyond the core recognition problem) is less characterized. One such example is object recognition under occlusion. It is uncle…

Cited by 86 publications (111 citation statements)
References 98 publications
“…To investigate the effect of network depth on scene segmentation, tests were conducted on seven deep residual networks (ResNets; [29]) with an increasing number of layers (6, 10, 18, 34, 50, 101, 152); the implementation of Wilber [50] was used. In this implementation, input images from the ImageNet dataset [25] were randomly cropped to 224x224 from a resized image using the scale and aspect ratio augmentation of Szegedy et al. (2015) [51].…”
Section: Network
confidence: 99%
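The scale and aspect ratio augmentation of Szegedy et al. (2015) referenced in the statement above samples a crop covering a random fraction of the image area at a random aspect ratio, which is then resized to the network input size. A minimal sketch of the box-sampling step, written from the published description rather than from the implementation of Wilber [50] (parameter defaults are the commonly used ones and are an assumption here):

```python
import math
import random

def random_resized_crop_params(img_w, img_h,
                               scale=(0.08, 1.0),   # assumed area range
                               ratio=(3/4, 4/3),    # assumed aspect range
                               max_tries=10,
                               rng=random):
    """Sample a crop box (left, top, width, height) covering a random
    fraction of the image area with a random aspect ratio, in the style
    of the Szegedy et al. (2015) augmentation."""
    area = img_w * img_h
    for _ in range(max_tries):
        target_area = rng.uniform(*scale) * area
        # Sample the aspect ratio log-uniformly so w/h and h/w are symmetric.
        aspect = math.exp(rng.uniform(math.log(ratio[0]), math.log(ratio[1])))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= img_w and 0 < h <= img_h:
            left = rng.randint(0, img_w - w)
            top = rng.randint(0, img_h - h)
            return left, top, w, h
    # Fallback if no valid box was found: centered square crop.
    s = min(img_w, img_h)
    return (img_w - s) // 2, (img_h - s) // 2, s, s
```

The sampled box would then be cut out and resized to the 224x224 network input mentioned in the quoted passage.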
“…Models. As in experiment 1, we used deep residual network architectures (ResNets; [29]) with an increasing number of layers (6, 10, 18, 34). We did not use ResNets with more than 34 layers, as the simplicity of the task leads to overfitting problems for the 'ultra-deep' networks.…”
Section: Experiment 2: Training on Unsegmented/Segmented Objects
confidence: 99%
“…Together with representational similarity analysis [2], MVPA has demonstrated promising capabilities in revealing subtle neural signatures of cognitive processes in magnetoencephalography (MEG), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI) data [3–15]. In many neuroimaging studies [4, 16–18], the pattern-vector dimension (number of M/EEG channels or fMRI voxels) greatly exceeds the number of data samples (experimental trials), thus incurring the "curse of dimensionality" [19, 20], which consequently deteriorates classifier performance.…”
Section: Introduction
confidence: 99%
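The problem described in the statement above — far more channels/voxels than trials — is commonly mitigated by reducing dimensionality before classification. A minimal NumPy sketch of one such pipeline (PCA via SVD followed by a nearest-centroid classifier); the synthetic data, component count, and classifier choice are illustrative assumptions, not the method of the cited studies:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project trials (rows) onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def nearest_centroid_fit(X, y):
    """Compute the mean pattern (centroid) of each class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each trial to the class with the closest centroid."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Synthetic "trials": 40 samples x 300 features (channels/voxels),
# two classes separated along the first 5 features only.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 300))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 3.0

Z = pca_reduce(X, n_components=5)  # 300 dims -> 5 dims before classifying
classes, centroids = nearest_centroid_fit(Z, y)
acc = (nearest_centroid_predict(Z, classes, centroids) == y).mean()
```

Reducing 300 noisy features to a few components keeps the class-discriminative variance while shrinking the dimension well below the trial count, which is the standard remedy for the "curse of dimensionality" the passage mentions.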