High-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. What object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical properties by presenting perceptually matched objects that were easily recognizable as animate or inanimate (e.g., snake and rope). In a series of behavioral experiments, we quantified three aspects of the perceptual dissimilarity of these objects: overall dissimilarity, outline dissimilarity, and texture dissimilarity. Neural dissimilarity of MEG sensor patterns, collected in male and female human participants, was modeled using regression analysis, with perceptual dissimilarity (taken from the behavioral experiments) and categorical dissimilarity serving as predictors. We found that perceptual dissimilarity was strongly reflected in MEG sensor patterns from 80 ms after stimulus onset, with separable contributions of outline and texture dissimilarity. Surprisingly, after controlling for perceptual dissimilarity, MEG patterns did not distinguish between animate and inanimate objects. Nearly identical results were obtained in a second MEG experiment that required object recognition. These results indicate that MEG sensor patterns do not capture object animacy independently of perceptual differences between animate and inanimate objects. This contrasts with fMRI results obtained using the same stimuli, task, and analysis approach: fMRI showed a highly reliable categorical distinction in ventral temporal cortex even when perceptual dissimilarity was controlled for. The discrepancy between MEG and fMRI precludes a straightforward integration of these imaging modalities.
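The analysis described above is a regression variant of representational similarity analysis: at each timepoint, a neural representational dissimilarity matrix (RDM) is computed from the sensor patterns and regressed on model RDMs. Below is a minimal sketch of that idea, assuming hypothetical inputs: a `meg` array of condition-averaged sensor patterns (conditions x sensors x timepoints) and one predictor RDM per model (perceptual, outline, texture, categorical). The names, shapes, and preprocessing choices are illustrative assumptions, not the authors' actual pipeline.

```python
# Regression-based RSA sketch; inputs and preprocessing are assumptions,
# not the published pipeline.
import numpy as np
from scipy.stats import zscore

def upper_tri(rdm):
    """Vectorize the upper triangle of a symmetric RDM (diagonal excluded)."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def neural_rdm(patterns):
    """1 - Pearson correlation between condition patterns (conditions x sensors)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_regression(meg, predictor_rdms):
    """For each timepoint, regress the neural RDM on the predictor RDMs.

    meg: array (conditions x sensors x timepoints)
    predictor_rdms: list of (conditions x conditions) model RDMs
    Returns beta weights of shape (timepoints, n_predictors).
    """
    # Z-score each vectorized predictor so betas are comparable across models.
    X = np.column_stack([zscore(upper_tri(p)) for p in predictor_rdms])
    X = np.column_stack([np.ones(X.shape[0]), X])  # add intercept
    betas = []
    for t in range(meg.shape[2]):
        y = zscore(upper_tri(neural_rdm(meg[:, :, t])))
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append(b[1:])  # drop the intercept
    return np.array(betas)
```

Because all predictors enter the regression simultaneously, the beta for the categorical RDM estimates the variance in neural dissimilarity that animacy explains over and above the perceptual predictors; this is the quantity the study found to be reliably present in fMRI but absent in MEG.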
Significance statement
Recent studies have shown that multivariate analysis of MEG sensor patterns allows for a detailed characterization of the time course of visual object processing, demonstrating that the neural representational space becomes organized by object category (e.g., separating animate and inanimate objects) from around 150 ms after stimulus onset. It is unclear, however, whether this organization truly reflects a categorical distinction or whether it reflects uncontrolled perceptual differences between the categories (e.g., most animals have four legs). Here, we find that MEG sensor patterns no longer distinguish between animate and inanimate objects when perceptual differences between the objects are controlled for (e.g., when comparing snake and rope). These results indicate that MEG sensor patterns are primarily sensitive to visual object properties.
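For concreteness, the time-resolved multivariate analysis referenced above is typically a cross-validated classifier trained at each timepoint. The sketch below illustrates that general technique under assumed inputs, hypothetical `trials` (n_trials x n_sensors x n_timepoints) and binary animacy `labels`; it is not the cited studies' exact pipeline.

```python
# Time-resolved category decoding sketch; data names and classifier
# choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_over_time(trials, labels, cv=5):
    """Cross-validated animacy decoding accuracy at each timepoint."""
    n_timepoints = trials.shape[2]
    accuracy = np.empty(n_timepoints)
    for t in range(n_timepoints):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        # Decode animate vs. inanimate from the sensor pattern at timepoint t.
        accuracy[t] = cross_val_score(clf, trials[:, :, t], labels, cv=cv).mean()
    return accuracy  # above-chance (>0.5) accuracy indicates decodable information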