2019
DOI: 10.1167/19.3.8

Recall of facial expressions and simple orientations reveals competition for resources at multiple levels of the visual hierarchy

Abstract: Many studies of visual working memory have tested humans' ability to reproduce primary visual features of simple objects, such as the orientation of a grating or the hue of a color patch, following a delay. A consistent finding of such studies is that precision of responses declines as the number of items in memory increases. Here we compared visual working memory for primary features and high-level objects. We presented participants with memory arrays consisting of oriented gratings, facial expressions, or a …

Cited by 6 publications (3 citation statements) · References 64 publications (82 reference statements)
“…In addition, we have shown that perceptually-matched images that are perceived as a face are not only better remembered in a working memory task than those not perceived as a face, but also elicit a larger CDA (Asp, Störmer & Brady, 2019), once again showing that 'online' storage in working memory nearly always tracks behavioral performance in these tasks, rather than participants relying on a mix of memory systems only for some stimuli at some encoding times but not others. Consistent with this model of greater engagement of higher-level regions with meaningful stimuli, Salmela et al. (2019) have shown that storing faces in memory results in the storage of both low- and high-level information about them, whereas simple orientation stimuli are stored in a solely low-level way. Furthermore, a significant literature has shown, using behavior alone, that familiarity and knowledge improve performance in short-term memory tasks even with perceptually well-matched or even identical stimuli (e.g., Alvarez & Cavanagh, 2004; Jackson & Raymond, 2008; Brady et al., 2009; Curby et al., 2009; Ngiam et al., 2019; O'Donnell, Clement, & Brockmole, 2018; Sahar et al., 2020; Starr, Srinivasan, & Bunge, 2020).…”
Section: Introduction (supporting)
confidence: 55%
“…Sadness was not included since we expected sadness to be perceived very similarly to the neutral expression, especially when a sad mouth is combined with another expression in the eyes. Our previous study also suggests that sadness is the least precisely identified, discriminated and remembered expression [20].…”
Section: Stimuli (mentioning)
confidence: 62%
“…Sims, Jacobs, & Knill, 2012; Wilken & Ma, 2004) and can be reproduced by Bayesian models that incorporate a prior reflecting the stimulus space (e.g. Salmela, Ölander, Muukkonen, & Bays, 2019). The present study did not distinguish whether the contraction bias comes from reporting the average of the tested feature values or a systematic bias to the center of the space, and future investigation is needed to clarify these two accounts.…”
Section: Discussion (mentioning)
confidence: 99%
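
The statement above notes that contraction biases in reproduction tasks can arise from Bayesian models that incorporate a prior reflecting the stimulus space (e.g., Salmela, Ölander, Muukkonen, & Bays, 2019). As a rough illustration of that idea only, and not an implementation from any of the cited papers, the minimal sketch below shows how a Gaussian observer with a prior centered on the middle of the stimulus range produces estimates that are pulled toward the center; all parameter values and the orientation-space setup are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not from the cited papers):
# a Bayesian observer whose prior is centered on the middle of the
# stimulus space shows a contraction bias, i.e. reports are pulled
# toward the center of the space.

rng = np.random.default_rng(0)

stim_min, stim_max = 0.0, 180.0          # assumed orientation space, in degrees
prior_mean = (stim_min + stim_max) / 2   # prior reflecting the stimulus space
prior_sd = 40.0                          # assumed prior width
noise_sd = 20.0                          # assumed sensory/memory noise

def bayesian_report(true_stimulus: float) -> float:
    """Posterior-mean estimate for one noisy observation (Gaussian prior, Gaussian noise)."""
    observation = true_stimulus + rng.normal(0.0, noise_sd)
    # Conjugate Gaussian update: posterior mean is a precision-weighted average
    # of the prior mean and the noisy observation.
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / noise_sd**2
    return (w_prior * prior_mean + w_obs * observation) / (w_prior + w_obs)

# A stimulus far from the center is, on average, reported closer to the center.
true_value = 150.0
mean_report = np.mean([bayesian_report(true_value) for _ in range(10_000)])
print(f"true = {true_value:.1f}, mean report = {mean_report:.1f}")  # mean report < 150
```

In this toy setup the bias toward 90 degrees grows with the noise level and shrinks as the prior widens, which is the qualitative pattern the quoted statement attributes to prior-based Bayesian accounts of contraction bias.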