2015
DOI: 10.1002/hbm.22984

Parietal cortex integrates contextual and saliency signals during the encoding of natural scenes in working memory

Abstract: Brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI, we presented participants with pictures of everyday scenes. After a short retention interval,…

Cited by 44 publications (47 citation statements) · References 65 publications (104 reference statements)
“…As already discussed in Section 1.1, observers’ tolerance to interstimulus delays during judgements of simultaneity of AV stimuli is modulated by the particular demands of the simultaneity-judgement task (Stevenson and Wallace 2013; but this tolerance likewise depends on the stimulus category, see below). Collectively, these findings demonstrate that the role of matching multisensory stimulus features can be better understood in situations where the congruence of object features is task-irrelevant (Mastroberardino et al 2015; Santangelo et al 2015).…”
Section: Task-based Effects · Citation type: mentioning
Confidence: 73%
“…Iordanescu et al 2009; van Ee et al 2009; Orchard-Mills et al 2013a, 2013b; Nardo et al 2014; Mastroberardino et al 2015). While these findings are in line with the influence of feature-based unisensory attention (Desimone and Duncan 1995), whether it applies to multisensory situations remains unclear. To investigate this possibility, Matusz and Eimer (2013) employed multi-stimulus visual displays and instructed participants to search for targets defined by a visual feature alone (e.g.…”
Section: Multisensory Processes Whose Occurrence Depends On Goals · Citation type: mentioning
Confidence: 86%
“…Perceiving the image as meaningful helps people perform a simple perceptual task—determining whether two Mooney images are identical or not. These behavioral improvements were related to differences in early visual processing (specifically, larger amplitudes of the P1 EEG signal; Samaha et al., 2016; see also Abdel Rahman and Sommer, 2008). Contra Pylyshyn’s (1999, p. 357) statement that “verbal hints [have] little effect on recognizing fragmented figures”, we find that not only do verbal hints greatly enhance recognition, but they also facilitate visual discrimination.…”
Section: Perceiving the Same Input In Different Ways: Attentional And… · Citation type: mentioning
Confidence: 96%
“…In the case of the Mooney image depicted in the last row of Figure 3, even superordinate linguistic cues like “animal” and “musical instrument” aid in recognition of the images. More specific cues (e.g., the word “trumpet”) are predictably more effective (Samaha et al., 2016). In other work, we have shown that hearing a verbal cue affects visual processing within 100 ms of visual onset (Boutonnet and Lupyan, 2015), results that we interpret as showing that verbal cues activate visual representations, establishing “priors” that change how subsequent stimuli are processed (Edmiston and Lupyan, 2015, 2017; Lupyan and Clark, 2015).…”
Section: Perceiving the Same Input In Different Ways: Attentional And… · Citation type: mentioning
Confidence: 99%
“…Following the seminal work of Itti and Koch, a diverse set of computational saliency approaches has been developed (for reviews, see e.g., Judd et al., 2012; Borji and Itti, 2013). Although research has provided empirical support for saliency-based attention models (e.g., when humans freely viewed or memorized visual scenes; Parkhurst et al., 2002; Foulsham and Underwood, 2008), several recent studies indicate circumstances under which these models work less well or fail completely (e.g., in the presence of top-down influences from visual search tasks; Foulsham and Underwood, 2007; Henderson et al., 2007; Einhäuser et al., 2008a; see also e.g., Stirk and Underwood, 2007; Einhäuser et al., 2008b; Santangelo, 2015; Santangelo et al., 2015; but see e.g., Borji et al., 2013; Spotorno et al., 2013). Importantly, prior research has also shed light on the power of saliency-based predictions in the context of social information.…”
Section: Introduction · Citation type: mentioning
Confidence: 99%
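
The center-surround architecture these saliency models share is compact enough to sketch in code. Below is a minimal, illustrative Python implementation of an Itti-Koch-style saliency map restricted to the intensity channel, assuming NumPy and SciPy are available; the function names and parameter choices (pyramid depth, center/surround scale pairs) are simplifications for illustration, not the pipeline of any specific model cited above, which would also include color and orientation channels and more elaborate normalization.

```python
# Minimal sketch of an Itti-Koch-style saliency map, intensity channel only.
# Assumes NumPy and SciPy; parameter choices here are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels=6):
    """Repeatedly blur and downsample to build a multi-scale pyramid."""
    pyr = [img]
    for _ in range(levels - 1):
        img = gaussian_filter(img, sigma=1.0)[::2, ::2]
        pyr.append(img)
    return pyr

def upsample_to(x, shape):
    """Bilinearly upsample a pyramid level back to the target resolution."""
    return zoom(x, (shape[0] / x.shape[0], shape[1] / x.shape[1]), order=1)

def saliency_map(img, centers=(2, 3), deltas=(2, 3)):
    """Sum |center - surround| differences across pairs of pyramid scales."""
    pyr = gaussian_pyramid(img)
    smap = np.zeros(img.shape)
    for c in centers:
        for d in deltas:
            s = c + d
            if s >= len(pyr):
                continue  # surround scale not available in this pyramid
            smap += np.abs(upsample_to(pyr[c], img.shape)
                           - upsample_to(pyr[s], img.shape))
    return smap / smap.max()  # normalize to [0, 1]

# Usage: a random array stands in for a grayscale natural scene in [0, 1].
img = np.random.rand(256, 256)
saliency = saliency_map(img)
```

Regions whose intensity differs strongly from their surround at several scales receive high values in the resulting map, which is the basic sense in which these models predict where overt attention lands during free viewing.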