2017
DOI: 10.1167/17.4.9

Central and peripheral vision for scene recognition: A neurocomputational modeling exploration

Abstract: What are the roles of central and peripheral vision in human scene recognition? Larson and Loschky (2009) showed that peripheral vision contributes more than central vision in obtaining maximum scene recognition accuracy. However, central vision is more efficient for scene recognition than peripheral, based on the amount of visual area needed for accurate recognition. In this study, we model and explain the results of Larson and Loschky (2009) using a neurocomputational modeling approach. We show that the adva…


Cited by 35 publications (36 citation statements). References 103 publications (137 reference statements).
“…The field of deep learning has traditionally focused on feedforward models of visual processing. These models have been used to describe neural responses in the ventral stream of humans and other primates (Cadieu et al., 2014; Güçlü and van Gerven, 2015; Yamins and DiCarlo, 2016; Wang and Cottrell, 2017) and have resulted in many practical successes (Gu et al., 2017). More recently, convolutional neural networks that include recurrent connections (both lateral and top-down) have also been proposed (Spoerer et al., 2017).…”
Section: Application: Image Classification (mentioning; confidence: 99%)
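
To make the distinction concrete, here is a minimal, hypothetical sketch of a convolutional layer augmented with a lateral recurrent connection and unrolled for a few time steps. It is written in PyTorch; the class name, channel counts, and number of unrolled steps are illustrative assumptions, not the architecture of any of the cited papers.

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """A feedforward conv layer plus a lateral (within-layer) recurrent conv,
    unrolled for a fixed number of time steps (illustrative sketch only)."""
    def __init__(self, in_channels, out_channels, steps=3):
        super().__init__()
        self.feedforward = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.lateral = nn.Conv2d(out_channels, out_channels, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        # First pass is purely feedforward; later passes add lateral recurrence.
        h = torch.relu(self.feedforward(x))
        for _ in range(self.steps - 1):
            h = torch.relu(self.feedforward(x) + self.lateral(h))
        return h

# Usage: one block applied to a batch of 224x224 RGB images.
block = RecurrentConvBlock(3, 16, steps=3)
out = block(torch.randn(1, 3, 224, 224))  # -> shape (1, 16, 224, 224)
```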
“…Also, half of these videoclips contain sub-sequences that were previously annotated as acting. To approximate the driver's visual field of attention as realistically as possible, the sampled videoclips are pre-processed following the procedure in [71]. As in [71], we leverage the Space Variant Imaging Toolbox [48] to implement this phase, setting the parameter that halves the spatial resolution every 2.3° to mirror human vision [36], [71].…”
Section: Visual Assessment of Predicted Fixation Maps (mentioning; confidence: 99%)
“…The resulting videoclip preserves details near the fixation points in each frame, whereas the rest of the scene becomes increasingly blurred with distance from the fixations, until only low-frequency contextual information survives. Consistent with [71], we refer to this process as foveation (in analogy with human foveal vision). Thus, the pre-processed videoclips will be called foveated videoclips from now on.…”
Section: Visual Assessment of Predicted Fixation Maps (mentioning; confidence: 99%)
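
As a rough illustration of the eccentricity-dependent blurring described above, the sketch below blurs a grayscale frame more strongly with distance from a single fixation point, so that resolution nominally halves every 2.3° of eccentricity. It is a simplified stand-in, not the Space Variant Imaging Toolbox used in [71]; the pixels-per-degree value and the small blur pyramid are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixation, pixels_per_degree=30.0, half_res_deg=2.3):
    """Blur a 2D grayscale image more strongly with distance from `fixation`
    (row, col), modeling resolution as halving every `half_res_deg` degrees."""
    rows, cols = np.indices(image.shape, dtype=float)
    ecc_deg = np.hypot(rows - fixation[0], cols - fixation[1]) / pixels_per_degree
    # Blur strength grows with eccentricity; zero at the fixation point.
    sigma_map = 2.0 ** (ecc_deg / half_res_deg) - 1.0

    # Blend a small pyramid of uniformly blurred copies according to sigma_map.
    sigmas = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    levels = np.stack([gaussian_filter(image, s) for s in sigmas])
    idx = np.clip(np.digitize(sigma_map, sigmas) - 1, 0, len(sigmas) - 2)
    lo, hi = sigmas[idx], sigmas[idx + 1]
    w = np.clip((sigma_map - lo) / (hi - lo), 0.0, 1.0)
    lo_img = np.take_along_axis(levels, idx[None], axis=0)[0]
    hi_img = np.take_along_axis(levels, (idx + 1)[None], axis=0)[0]
    return (1.0 - w) * lo_img + w * hi_img

# Usage: foveate a random 240x320 frame around a central fixation.
foveated = foveate(np.random.rand(240, 320), fixation=(120, 160))
```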
“…Most theories or models of scene-gist recognition are relatively silent on this issue (Fei-Fei, VanRullen, Koch, & Perona, 2005; Oliva, 2005). Thus, a thorough understanding of scene-gist recognition and computational models of rapid scene categorization must take into account differences in visual processing between central and peripheral vision (Wang & Cottrell, 2017).…”
Section: Scene-Gist Recognition From Central to Peripheral Vision (mentioning; confidence: 99%)