2016
DOI: 10.1167/16.5.10
Crowdsourced single-trial probes of visual working memory for irrelevant features

Abstract: We measured the precision with which an irrelevant feature of a relevant object is stored in visual short-term memory. In each experiment, 600 online subjects each completed 30 trials in which the same feature (orientation or color) was relevant, followed by a single surprise trial in which the other feature was relevant. Pooling data across all subjects, we find in a delayed-estimation task but not in a change localization task that the irrelevant feature is retrieved, but with much lower precision than when …

Cited by 34 publications (33 citation statements) · References 25 publications
“…Experiments may also require recruiting participants from disparate cultural backgrounds (Curtis and Bharucha, 2009; Henrich et al, 2010) that are more readily recruited online than in person. Alternatively, it may be desirable to run only a small number of trials on each participant, or even just a single critical trial (Simons and Chabris, 1999; Shin and Ma, 2016), after which the participant may become aware of the experiment’s purpose. In all of these cases recruiting adequate sets of participants in the lab might be prohibitively difficult, and online experiments facilitated by a headphone check could be a useful addition to a psychoacoustician’s toolbox.…”
Section: Discussion
“…This makes behavioral research highly accessible and efficient, and the ability to obtain data from large samples or diverse populations allows new kinds of questions to be addressed. Crowdsourcing has become popular in a number of subfields within cognitive psychology (Buhrmester et al, 2011; Crump et al, 2013), including visual perception (Brady and Alvarez, 2011; Freeman et al, 2013; Shin and Ma, 2016), cognition (Frank and Goodman, 2012; Hartshorne and Germine, 2015), and linguistics (Sprouse, 2010; Gibson et al, 2011; Saunders et al, 2013). Experimenters in these fields have developed methods to maximize the quality of web-collected data (Meade and Bartholomew, 2012; Chandler et al, 2013).…”
Section: Introduction
“…Memory for different features can be differentially affected by retro‐cues indicating which feature dimension is going to be tested (Park, Sy, Hong, & Tong, ), supporting some independence in storing different features of the same items, but also a trade‐off of capacity between them. A number of studies have also investigated to what degree task‐irrelevant features are memorized, finding low‐precision but above‐chance performance in surprise tests (Shin & Ma, , ; Swan et al., ), some degree of interference from irrelevant feature changes (Gao, Gao, Li, Sun, & Shen, ; Hyun, Woodman, Vogel, Hollingworth, & Luck, ; Shen, Tang, Wu, Shui, & Gao, ), and limitations on the ability to ignore features of specific items in mixed displays (Marshall & Bays, ; Vidal, Gauchou, Tallon‐Baudry, & O'Regan, ).…”
Section: Objects and Features
“…Furthermore, in the cognitive psychology literature, surprise test methodologies are an important tool for explicitly probing memory of stimuli that subjects did not expect to report. Inattentional blindness (Mack & Rock, 1998), change blindness (Simons & Levin, 1997), and attribute amnesia (Chen & Wyble, 2015a) have importantly shown the limitations of human visual processing by using surprise tests (e.g., Chen, Swan, & Wyble, 2016; Eitam, Shoval, & Yeshurun, 2015; Eitam, Yeshurun, & Hassan, 2013; Shin & Ma, 2016; Swan, Collins, & Wyble, 2016). It is debated whether the inability to answer such surprise questions is due to a failure to encode the information (i.e., a failure of perception; Mack & Rock, 1998) or a loss of the contents of working memory (i.e., amnesia; Jiang, Shupe, Swallow, & Tan, 2016; Wolfe, 1999).…”