2020
DOI: 10.31234/osf.io/25t76
Preprint

Comparing memory capacity across stimuli requires maximally dissimilar foils: Using deep convolutional neural networks to understand visual working memory capacity for real-world objects

Abstract: The capacity of visual working memory has been subject to considerable debate: whether there is a relatively fixed item limit, regardless of what these items are; or a fixed resource limit; or whether capacity limits vary depending on the complexity or familiarity of items. Here, we argue that before asking the question of how capacity varies across different stimuli, it is necessary to establish a methodology that allows a fair comparison between distinct stimulus sets. One of the most important factors deter…

Cited by 18 publications (48 citation statements)
References 40 publications
“…Deep convolutional neural networks are useful models of human recognition and the human visual system (e.g., Yamins et al, 2014 ), and deep nets trained on categorization are sensitive to some extent to both visual and semantic features (e.g., Jozwik et al, 2017 ; Peterson et al, 2018 ). Such measures also reliably predict memory confusability (Brady & Störmer, 2020 ). Here, we found they captured the categorical structure of the individual images, and that they indicated that artifacts would be more similar to the search targets than the other stimuli, providing further validation of our stimulus design.…”
Section: Discussion
confidence: 99%
“…Thus, to estimate the pairwise image similarity, we made use of a pre-trained deep convolutional neural network (CNN). CNNs are useful metrics of object similarity for predicting memory (Brady & Störmer, 2020 ), and have consistently been shown to provide some level of match to the human visual system (Yamins et al, 2014 ). As such models are trained to perform categorization, the high-level features in them contain both semantic and visual information, but are primarily visual in the sense that the network has no broader conceptual knowledge beyond categorical classification.…”
Section: Stimulus Validation
confidence: 99%
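The quoted passage describes estimating pairwise image similarity from the high-level features of a pre-trained categorization CNN. As a minimal sketch of the similarity computation itself (not the cited authors' actual pipeline): in practice each feature vector would be the penultimate-layer activation of a pre-trained network for one image; here the vectors are toy stand-ins, and the function names are hypothetical.

```python
# Sketch: pairwise similarity between stimuli from feature vectors.
# Assumption: in the real method, `feats` would hold penultimate-layer
# CNN activations, one vector per image; toy vectors are used here.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pairwise_similarity(features):
    """Full n-by-n similarity matrix for a list of feature vectors."""
    n = len(features)
    return [[cosine_similarity(features[i], features[j]) for j in range(n)]
            for i in range(n)]

# Toy "CNN features" for three images: the first two are visually close,
# the third is very different.
feats = [[1.0, 0.0, 1.0], [1.0, 0.1, 0.9], [0.0, 1.0, 0.0]]
sim = pairwise_similarity(feats)
```

The resulting matrix captures the intuition in the quote: stimuli whose CNN features point in similar directions (images 1 and 2) get high similarity and are predicted to be more confusable in memory than dissimilar pairs.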
“…For example, Brady et al (2016) showed a boost in performance for real-world objects that was attributable to more active storage in visual working memory, consistent with a theory where additional high-level information about such objects, perhaps in the ventral stream, is maintained in working memory in addition to low-level information. Some recent studies (Li, Xiong, Theeuwes, & Wang, 2020; Quirk, Adam, & Vogel, 2020) instead found no difference between storing simple features and real-world objects in visual working memory, but these results were likely due to a lack of control for similarity between targets and foils in the color versus real-world object tasks (Brady & Störmer, 2020; Brady & Störmer, in press). With better control for target-foil similarity (Brady & Störmer, 2020), real-world objects result in significantly better performance compared with simple features (Brady & Störmer, 2020; Brady & Störmer, in press).…”
Section: Are Real-world Objects Likely To Be Stored Holistically In V…
confidence: 93%
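The target-foil similarity control discussed above is also the paper's titular point: a fair capacity comparison requires foils that are maximally dissimilar from the targets. A minimal sketch of that selection step, assuming a pairwise similarity matrix like the one a CNN would provide (the function name and toy matrix are illustrative, not from the paper):

```python
# Sketch: choosing a maximally dissimilar foil for a memory test.
# Assumption: `similarity` is a precomputed pairwise similarity matrix
# (e.g., from CNN features); values and indices here are toy examples.
def pick_max_dissimilar_foil(target_idx, similarity, candidates):
    """Return the candidate index with the lowest similarity to the target."""
    return min(candidates, key=lambda j: similarity[target_idx][j])

# Toy similarity matrix for four stimuli (1.0 = identical).
sim = [
    [1.0, 0.9, 0.4, 0.2],
    [0.9, 1.0, 0.5, 0.3],
    [0.4, 0.5, 1.0, 0.6],
    [0.2, 0.3, 0.6, 1.0],
]

# For target stimulus 0, stimulus 3 is the least similar candidate,
# so it serves as the foil.
foil = pick_max_dissimilar_foil(0, sim, [1, 2, 3])
```

Equating foil dissimilarity across stimulus sets in this way removes the confound the quote attributes to earlier null results: a "hard" foil in one set and an "easy" foil in another would otherwise masquerade as a capacity difference.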
“…Looking at the performance on the exemplar task compared with the exemplar-state task, one could argue that there are differences in the exemplar and state manipulations themselves that account for this effect. For example, it intuitively seems that two different states of the same object might be more visually similar than two different exemplars in the same state, and that this could affect the two-alternative forced choice task performance (e.g., Brady & Störmer, 2020). In this case, the exemplar-state task would be harder than the exemplar task based on the images alone, rather than because of binding difficulties.…”
Section: Similarity Between Exemplar and State Pairs
confidence: 99%