2016
DOI: 10.1016/j.neuroimage.2016.03.027

Visual information representation and rapid-scene categorization are simultaneous across cortex: An MEG study

Abstract: Perceiving the visual world around us requires the brain to represent the features of stimuli and to categorize the stimulus based on these features. Incorrect categorization can result either from errors in visual representation or from errors in processes that lead to categorical choice. To understand the temporal relationship between the neural signatures of such systematic errors, we recorded whole-scalp magnetoencephalography (MEG) data from human subjects performing a rapid-scene categorization task. We …

Cited by 23 publications (32 citation statements)
References 64 publications
“…Our decoding results revealed that decodable scene category information peaked between 150 and 200 ms after image onset and persisted across the trial epoch. These values are consistent with previous M/EEG studies of object- and scene categorization (Bankson, Hebart, Groen, & Baker, 2018; Carlson, Tovar, Alink, & Kriegeskorte, 2013; Cichy, Pantazis, & Oliva, 2014; Clarke, Taylor, Devereux, Randall, & Tyler, 2013; Ramkumar et al., 2016). While earlier decoding has been reported for image exemplars (~100 ms; Carlson et al., 2013; Cichy et al., 2014), it has remained unclear whether this performance reflects image identity per se, or the lower-level visual features that are associated with that exemplar.…”
Section: Discussion (supporting)
confidence: 90%
“…While this generally held true for the nine models used here, it should be noted that the gist features (Oliva & Torralba, 2001) were an exception (see Figure 1). Therefore, we have refrained from strongly interpreting results for that model, particularly the observation that this feature was not significantly predictive of the behavioral RDM (see Table 1), given previous reports that gist features can strongly influence categorization behavior (M. ), vERPs (Hansen, Noesen, Nador, & Harel, 2018), MEG patterns (Ramkumar et al., 2016), and fMRI activation patterns (Watson et al., 2017, 2014).…”
Section: Discussion (mentioning)
confidence: 99%
“…While these results point to relatively late effects, high-level categorization has been shown to occur at shorter latencies in the visual system of primates, especially when tested using sensitive multivariate methods (Cauchoix et al., 2016). In humans, multivariate pattern analysis (MVPA) of non-invasive electrophysiological data has shown potential to achieve a similar level of sensitivity, demonstrating rapid categorization along the ventral stream (Cauchoix, Barragan-Jason, Serre, & Barbeau, 2014; Isik, Meyers, Leibo, & Poggio, 2014; Ramkumar, Hansen, Pannasch, & Loschky, 2016). Fast decoding of object category was achieved at 100 ms from small neuronal populations in primates (Hung & Poggio, 2005) and from invasively recorded responses in human visual cortex (Li & Lu, 2009).…”
Section: Introduction (mentioning)
confidence: 99%