The nature of semantic memory and the role of the two cerebral hemispheres in meaning processing were examined using event-related brain potentials (ERPs) elicited by pictures in sentences. Participants read sentence pairs ending with the lateralized presentation of three target types: (1) expected pictures, (2) unexpected pictures from the expected semantic category, and (3) unexpected pictures from an unexpected category. ERPs to contextually unexpected pictures were more negative between 350 and 500 ms (larger N400s) than those to expected pictures in both visual fields. However, while N400s to the two types of unexpected items did not differ with left visual field presentations, they were smaller to unexpected items from the expected category with right visual field presentations. This pattern, previously observed for words [Brain Language 62 (1998) 149], suggests general differences in how the two hemispheres use context on-line. Other aspects of the N400 response, and effects on earlier ERP components, reveal differences between pictures and words, suggesting that semantic memory access is not modality-independent. The P2 component varied with ending type for right but not left visual field presentations, suggesting that the left hemisphere may use contextual information to prepare for the visual analysis of upcoming stimuli. Furthermore, there was clear evidence for an earlier negativity ("N300"), which varied with ending type but, unlike the N400, was unaffected by visual field of presentation. Overall, the results support our hypothesis that the left hemisphere actively uses top-down information to preactivate perceptual and semantic features of upcoming stimuli, whereas the right hemisphere adopts a "wait and see" integrative approach.