“…For example, listeners are more likely to direct their eye-gaze to a picture of an edible object (e.g., a cake) when they hear the beginning of an utterance like 'The boy will eat…' compared to a neutral utterance such as 'The boy will move…' (Altmann & Kamide, 1999). Further, much evidence has suggested that comprehenders compute rich expectations about upcoming inputs at multiple levels of representation (syntactic: Ilkin & Sturt, 2011; Lau, Stroud, Plesch, & Phillips, 2006; Levy, Fedorenko, Breen, & Gibson, 2012; Omaki et al., 2015; Staub & Clifton, 2006; Wicha et al., 2004; Van Berkum et al., 2005; Yoshida, Dickey, & Sturt, 2013; lexico-semantic: Federmeier & Kutas, 1999; Kutas & Hillyard, 1984; Otten & Van Berkum, 2008; Szewczyk & Schriefers, 2013; phonological and orthographic: DeLong et al., 2005; Dikker, Rabagliati, Farmer, & Pylkkänen, 2010; Dikker, Rabagliati, & Pylkkänen, 2009; Farmer, Yan, Bicknell, & Tanenhaus, 2015; Kim & Lai, 2012; Laszlo & Federmeier, 2009). Here, we operationally define 'prediction' as the pre-activation of stored representations before the bottom-up input is encountered, and we make no a priori assumptions regarding the nature of the mechanisms involved (e.g., whether they are automatic or controlled).…”