Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to pictures of items pertaining to different semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial axis) and category are represented within the same spatial locations early in time: 100-150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, also suggesting a role for this feature in the refinement of categorical matching.
Available under a Creative Commons CC0 license.
Indeed, each feature of Figures 1B-D is processed across the whole visual system. The primary visual cortex provides an optimal encoding of natural image statistics based on local contrast, orientation and spatial frequencies 2,3, and these low-level features significantly correlate with brain activity in higher-level visual areas 4,5. Nonetheless, occipital, temporal and parietal modules also process object shape 6-8 and categorical knowledge 9-11.
Although all these features are relevant to our brain, their relative contribution in producing discrete and coherent percepts has not yet been clarified. In general, these different dimensions are interrelated and share common biases (i.e., are collinear), thus limiting our ability to disentangle their specific roles 12. For instance, categorical discriminations can be driven either by object shape (e.g., tools have peculiar outlines) or spatial frequencies (e.g., faces and places have specific spectral signatures 13). Consequently, object shape and category are processed by the same regions across the visual cortex, even when using a balanced set of stimuli 14. Even so, the combination of multiple feature-based models appears to describe underlying object representations at a neural level better than the same models tested in isolation. For instance, a magnetoencephalography (MEG) study found that combining low-level and semantic features improves the prediction accuracy of brain responses to viewed objects, thus suggesting that semantic information integrates with visual features during the temporal formation of object representations 15.
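The collinearity problem described above is what Relative Weights Analysis, the method named in the Abstract, is designed to address: it partitions the variance explained by a set of correlated predictors into non-negative, per-predictor contributions. A minimal sketch of Johnson's (2000) relative weights procedure in Python/NumPy follows; the function name and toy usage are our own illustration under stated assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def relative_weights(X, y):
    """Illustrative sketch of Johnson's (2000) Relative Weights Analysis.

    Partitions the regression R^2 among p (possibly collinear) predictors by
    (1) finding the column-orthonormal matrix Z closest to the standardized
    predictors, (2) regressing y on Z, and (3) mapping the orthogonal
    coefficients back onto the original predictors.
    Returns a length-p vector of non-negative weights that sum to R^2.
    """
    n, p = X.shape
    # Standardize to the correlation metric (unit-norm columns)
    Xs = (X - X.mean(0)) / X.std(0, ddof=1) / np.sqrt(n - 1)
    ys = (y - y.mean()) / y.std(ddof=1) / np.sqrt(n - 1)
    # SVD of the standardized predictors; Z = U V' is the closest
    # column-orthonormal matrix to Xs
    U, d, Vt = np.linalg.svd(Xs, full_matrices=False)
    Z = U @ Vt
    # Lambda = V diag(d) V' links the orthogonal variables to the predictors
    Lam = (Vt.T * d) @ Vt
    # Orthonormal columns make the regression a simple projection
    beta = Z.T @ ys
    # Relative weight of predictor j: sum_k Lam[j, k]^2 * beta[k]^2
    return (Lam ** 2) @ (beta ** 2)
```

The returned weights stay non-negative even when the predictors (here, e.g., model scores for contrast, spatial frequency, shape, and category) are strongly collinear, and they sum exactly to the model's R-squared, which is what allows the contribution of each feature model to be compared despite shared variance.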
Here, we...