2017
DOI: 10.1101/238584
Preprint

MEG sensor patterns reflect perceptual but not categorical similarity of animate and inanimate objects

Abstract: High-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. What object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical properties by presenting perceptually matched objects that were easily recognizable as being animate or inanimate (e.g., snake and rope). In a series of behavioral experiments, three aspects of perceptua…

Cited by 13 publications (14 citation statements)
References 40 publications
“…In the current study, we characterised the representational dynamics of a large number of images in fast presentation sequences. Previous work has used MEG and EEG decoding to investigate representations of much smaller image sets using slow image presentation paradigms (Carlson et al., 2013; Cichy et al., 2014; Contini et al., 2017; Grootswagers, Ritchie, et al., 2017; Kaiser et al., 2016; Kaneshiro et al., 2015; Proklova et al., 2017; Ritchie, Tovar, & Carlson, 2015; Simanova, van Gerven, Oostenveld, & Hagoort, 2010); here we extend this work by looking at the representations of 200 objects during RSVP using standard 64-channel EEG. For 5 Hz and 20 Hz sequences, all 200 images could be decoded at four different categorical levels.…”
Section: Discussion (mentioning)
confidence: 99%
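As a concrete illustration of the kind of analysis these studies describe, below is a minimal sketch of time-resolved decoding from sensor patterns. The data are simulated, and the array shapes, classifier, cross-validation scheme, and accuracy threshold are illustrative assumptions, not the pipeline of any cited study.

```python
# Minimal sketch of time-resolved ("decoding time course") classification
# from MEG/EEG sensor patterns. All data are simulated; shapes and the
# classifier choice are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_sensors, n_times = 200, 64, 120   # e.g., 64-channel EEG epochs
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # binary label, e.g., animate vs inanimate

# Inject a weak class-dependent signal after a hypothetical stimulus onset
# (time bin 30) in a subset of sensors, so decoding rises above chance there.
X[y == 1, :10, 30:] += 0.3

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Train and test a separate classifier at every time point; the resulting
# accuracy curve is the decoding time course.
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

onset = np.argmax(scores > 0.55)              # crude above-threshold onset index
print(f"peak accuracy {scores.max():.2f}; first above-threshold time bin: {onset}")
```

The same loop generalizes to multiclass labels, which is how decoding "at different categorical levels" is typically operationalized: one label vector per level of abstraction, decoded from the identical sensor data.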
“…The current stimulus set consisted of segmented coloured objects, which were not matched on low-level features such as colour, orientation, shape, and size. Future work can build on the current paradigm using a stimulus set that, for example, contains orthogonal shape and category dimensions (Bracci, Kalfas, & Op de Beeck, 2017; Bracci & Op de Beeck, 2016; Proklova et al., 2017, 2016), or test the decodability of these features using, for example, texture stimuli with similar features (Long, Konkle, Cohen, & Alvarez, 2016; Long, Yu, & Konkle, 2017). Such extensions can help unravel the relationship between object features and categories, and increase our understanding of how this inherent relationship guides categorical abstraction in the visual system.…”
Section: Discussion (mentioning)
confidence: 99%
“…In object perception, the decoding time course reflects the perceptual and categorical dissimilarity of stimuli, with more perceptually dissimilar stimuli (i.e. higher levels of abstraction) decodable later, and associated with computations performed by areas further along the visual-processing hierarchy (Proklova et al., 2019; Carlson et al., 2013; Cichy et al., 2014) (an example of increasing category abstraction is Doberman → dog → animal → animate). One way of thinking about the relatively later decoding of hue, then, is that (1) hue discrimination involves greater perceptual dissimilarity (or greater category abstraction) than does luminance contrast; and (2) it is computed either by circuits downstream of those that compute luminance contrast or requires more recurrent processing.…”
Section: Discussion (mentioning)
confidence: 99%
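The latency argument above can be made concrete: if one contrast becomes decodable later than another, its decoding onset (the first time bin where accuracy reliably exceeds chance) is larger. Below is a hedged sketch with simulated accuracy curves; the onset values, sigmoid shape, and threshold are arbitrary illustrative choices, not values taken from the cited studies.

```python
# Simulated decoding-accuracy curves for a "low-level" and a "high-level"
# contrast, and a simple onset estimate. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
times = np.arange(-100, 500, 5)               # ms, hypothetical epoch

def simulated_accuracy(onset_ms, peak):
    """Chance-level (0.5) curve rising sigmoidally after a given onset."""
    acc = 0.5 + peak / (1 + np.exp(-(times - onset_ms) / 20))
    return acc + rng.normal(0, 0.01, times.size)

low_level = simulated_accuracy(onset_ms=80, peak=0.25)    # e.g., luminance contrast
high_level = simulated_accuracy(onset_ms=180, peak=0.15)  # e.g., hue / animacy

def decoding_onset(acc, threshold=0.55):
    """First time point where accuracy exceeds the threshold."""
    above = np.flatnonzero(acc > threshold)
    return times[above[0]] if above.size else None

print("low-level onset: ", decoding_onset(low_level), "ms")
print("high-level onset:", decoding_onset(high_level), "ms")
```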
“…snake and rope) across category in order to examine the influence of perceptual and categorical similarity on object representations. Even though the studies used identical stimuli, the results differed between the two neuroimaging modalities: they found more evidence for categorical similarity with fMRI [68] and for perceptual similarity with MEG [69].…”
Section: The Temporal Dynamics in Neural Object Representations (mentioning)
confidence: 86%
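The comparison this statement describes is typically done with representational similarity analysis (RSA): a neural representational dissimilarity matrix (RDM) computed from sensor or voxel patterns is correlated with model RDMs capturing perceptual and categorical structure. Below is a minimal sketch on simulated data; the Spearman correlation, the toy "perceptual feature" model, and all shapes are assumptions, not the exact analyses of refs [68] and [69].

```python
# RSA-style sketch: correlate a time-resolved neural RDM with a "perceptual"
# and a "categorical" model RDM. All data are simulated; the geometry is
# constructed to follow the perceptual model, so the perceptual correlation
# should come out higher.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli, n_sensors, n_times = 24, 64, 50

# Categorical model RDM: 1 across the animate/inanimate boundary, 0 within
category = np.repeat([0.0, 1.0], n_stimuli // 2)
categorical_rdm = pdist(category[:, None], "cityblock")

# Perceptual model RDM from stand-in shape features (hypothetical)
shape_feats = rng.standard_normal((n_stimuli, 5))
perceptual_rdm = pdist(shape_feats)

# Simulated sensor patterns: shape features mixed into sensors, plus noise
mixing = rng.standard_normal((5, n_sensors))
clean = shape_feats @ mixing                  # (n_stimuli, n_sensors)
patterns = clean[:, :, None] + 2.0 * rng.standard_normal(
    (n_stimuli, n_sensors, n_times))

for t in (10, 25, 40):                        # a few sample time points
    neural_rdm = pdist(patterns[:, :, t], "correlation")
    r_perc, _ = spearmanr(neural_rdm, perceptual_rdm)
    r_cat, _ = spearmanr(neural_rdm, categorical_rdm)
    print(f"t={t}: perceptual rho={r_perc:.2f}, categorical rho={r_cat:.2f}")
```

Running the same model comparison on fMRI RDMs versus MEG RDMs is, schematically, how the two modalities can yield the diverging answers quoted above.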