Extant accounts of visually situated language processing do make general predictions about visual context effects on incremental sentence comprehension; these, however, are not sufficiently detailed to accommodate potentially different visual context effects (such as a scene-sentence mismatch based on actions versus thematic role relations; e.g., Altmann & Kamide, 2007; Knoeferle & Crocker, 2007; Taylor & Zwaan, 2008; Zwaan & Radvansky, 1998). To provide additional data for theory testing and development, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence (500 ms SOA in Experiment 1 and 300 ms SOA in Experiment 2), and post-sentence verification times indicating whether or not the verb and/or the thematic role relations matched a preceding picture (depicting two participants engaged in an action). Although both were registered incrementally, the two types of mismatch yielded different ERP effects. Role-relation mismatch effects emerged at the subject noun as anterior negativities to the mismatching noun, preceding action mismatch effects, which manifested as centro-parietal N400s that were larger to the mismatching verb, regardless of SOA. The two types of mismatch manipulation also yielded different effects post-verbally, correlated differently with participants' mean accuracy, verbal working memory, and visual-spatial scores, and differed in their interactions with SOA. Taken together, these results clearly implicate more than a single mismatch mechanism, which extant accounts of picture-sentence processing will need to accommodate.