The bulk of support for predictive coding models has come from their ability to simulate known perceptual or neuronal phenomena, but there have been fewer attempts to identify a reliable neural signature of predictive coding. Here we propose that the N300 component of the event-related potential (ERP), occurring 250-350 ms post-stimulus onset, may be such a signature of perceptual hypothesis testing operating at the scale of whole objects and scenes. We show that N300 amplitudes are smaller in response to representative ('good exemplars') than to less representative ('bad exemplars') items from natural scene categories. Integrating these results with patterns observed for objects, we establish that, across a variety of visual stimuli, the N300 is responsive to statistical regularity, or the degree to which the input is 'expected' (either explicitly or implicitly) by the system based on prior knowledge, with statistically regular images, which entail reduced prediction error, evoking a reduced response. Moreover, we show that the measure exhibits context-dependency; that is, we find N300 sensitivity to category representativeness only when stimuli are congruent with, and not when they are incongruent with, a category pre-cue, suggesting that the component may reflect the ease with which an image matches the current hypothesis generated by the visual system. Thus, we argue that the N300 ERP component is the best candidate to date for an index of perceptual hypothesis testing, whereby incoming sensory information about complex visual objects and scenes is assessed against contextual predictions generated in mid-level visual areas.