2018
DOI: 10.1016/j.neuropsychologia.2018.09.016

No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing

Acknowledgments (excerpt; the abstract itself was not captured): … for valuable help with data collection. We thank Sage Boettcher for comments and discussion. We also would like to thank the two anonymous reviewers for their extremely constructive suggestions and comments.


Cited by 56 publications (63 citation statements)
References 51 publications
“…Sensitivity to spatial structure emerged after 255 ms of processing, which is only after scene‐selective peaks in ERPs (Harel et al, ; Sato et al, ) and after basic scene attributes are computed (Cichy, Khosla, Pantazis, & Oliva, ). Interestingly, after 250 ms brain responses not only become sensitive to scene structure, but also to object‐scene consistencies (Draschkow et al, ; Ganis & Kutas, ; Mudrik et al, ; Võ & Wolfe, ). Together, these results suggest a dedicated processing stage for the structural analysis of objects, scenes, and their relationships, which is different from basic perceptual processing.…”
Section: Discussion
confidence: 99%
“…Related studies on object‐object and object‐scene consistencies typically yield large effect sizes which exceed this value, both for fMRI responses, d = 0.72 (Brandman & Peelen, ), d = 0.67 (Kaiser & Peelen, ), d = 2.14 (Kim & Biederman, ), d = 0.94 (Roberts & Humphreys, ), and EEG responses, d = 0.71 (Draschkow, Heikel, Võ, Fiebach, & Sassenhagen, ), d = 0.88 (Ganis & Kutas, ), d = 0.67 (Mudrik, Lamy, & Deouell, ), d = 0.69 (Võ & Wolfe, ).…”
confidence: 99%
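The effect sizes quoted above are Cohen's d values for within-subject consistency effects. As a minimal sketch of how such a paired d can be computed (the function name and the per-subject amplitudes below are hypothetical illustrations, not data from any cited study):

```python
from statistics import mean, stdev

def cohens_d_paired(cond_a, cond_b):
    """Paired Cohen's d: mean of per-subject condition differences
    divided by the sample SD of those differences."""
    diffs = [b - a for a, b in zip(cond_a, cond_b)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-subject mean N400 amplitudes (microvolts)
consistent = [-1.2, -0.8, -1.5, -0.9, -1.1, -1.3]
inconsistent = [-2.5, -0.6, -2.8, -1.0, -1.3, -2.9]

# Negative d: larger (more negative) amplitude for inconsistent objects
d = cohens_d_paired(consistent, inconsistent)
```

Note that individual studies may standardize differently (e.g., by a pooled SD rather than the SD of differences); this is one common paired variant.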
“…The preprocessing and analysis scripts for both experiments can be found as HTML files and as reproducible scripts (Jupyter notebooks; Kluyver et al., 2016) at https://github.com/SageBoettcher/identityTemplates. The preprocessing pipeline is modified from the analysis pipeline used by Draschkow and colleagues (Draschkow et al., 2018). All EEG data analysis was conducted in MNE-Python (Gramfort et al., 2013).…”
Section: EEG Acquisition (Experiments 1 and 2)
confidence: 99%
“…In event-related potentials (ERPs), it is commonly found that scene-inconsistent objects elicit a larger negative brain response compared to consistent ones. This long-lasting negative shift typically starts as early as 200-250 ms after stimulus onset (Mudrik, Shalgi, Lamy, & Deouell, 2014; Draschkow, Heikel, Võ, Fiebach, & Sassenhagen, 2018) and has its maximum at frontocentral scalp sites, in contrast to the centroparietal N400 effect for words (e.g., Kutas & Federmeier, 2011). The effect was found when the object appeared at a cued location after the scene background was already shown (Ganis & Kutas, 2003), for objects that were photoshopped into the scene (Mudrik, Lamy, & Deouell, 2010; Mudrik et al., 2014; Coco, Araujo, & Petersson, 2017), and for objects that were part of realistic photographs (Võ & Wolfe, 2013).…”
Section: Introduction
confidence: 94%
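The statement above characterizes the scene-inconsistency effect as a negative-going difference wave (inconsistent minus consistent) with an onset around 200-250 ms. A minimal sketch of that onset-detection logic, assuming hypothetical values throughout (sampling rate, amplitudes, and threshold are illustrative, not taken from the paper):

```python
SFREQ = 500  # Hz, assumed sampling rate

def effect_onset_ms(consistent, inconsistent, threshold=-0.5):
    """Latency (ms) at which the difference wave (inconsistent minus
    consistent) first drops below `threshold` microvolts; None if never."""
    for i, (c, inc) in enumerate(zip(consistent, inconsistent)):
        if inc - c < threshold:
            return 1000.0 * i / SFREQ
    return None

# Synthetic ERPs: a -1 microvolt shift appears 125 samples (250 ms) in
consistent = [0.0] * 250
inconsistent = [0.0] * 125 + [-1.0] * 125

onset = effect_onset_ms(consistent, inconsistent)  # 250.0 ms
```

In practice, onsets are estimated statistically across subjects (e.g., via cluster-based permutation tests) rather than by a single-threshold crossing; this sketch only illustrates the difference-wave arithmetic.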
“…The earlier part of the negative response, usually referred to as N300, has been taken to reflect the context-dependent difficulty of object identification, whereas the later N400 has been linked to semantic integration processes after the object is identified (e.g., Dyck & Brodeur, 2015). The present study was not designed to differentiate between these two subcomponents, especially considering that their scalp distribution is strongly overlapping or even topographically indistinguishable (Draschkow et al, 2018). Thus, for reasons of simplicity, we will in most cases simply refer to all frontocentral negativities as "N400" in the current study, but this term is meant to include the earlier N300 part of the effect.…”
Section: Introduction
confidence: 99%