Recent in vivo data show 'look-ahead' activity in ensembles of medial entorhinal neurons, which decode spatially to reward locations ahead of a rat deliberating at a choice point during a cued, appetitive T-maze task. To model these look-ahead results, we adapted a previous model in which scans along equally probable directions activated place cells, associated reward cells, grid cells, and persistent spiking cells along the scanned trajectories. Such look-ahead activity may reflect scans the animal performs to reduce ambiguity while making decisions. In the updated model, look-ahead scans at the choice point can activate goal-associated reward and place cells, which indicate the direction in which the virtual rat should turn. Hebbian associations between the stimulus and reward cell layers are learned during training trials, and the reward and place layers are then used during testing to retrieve goal-associated cells based on cue presentation. The system creates representations of location and associated reward from only two inputs, heading and speed, which drive the grid cell and place cell layers. We present spatial and temporal decoding of grid cell ensembles as the virtual rat is tested with perfect and imperfect stimuli. The virtual rat reliably learns goal locations over training sessions and performs both biased and unbiased look-ahead scans at the choice point. Spatial and temporal decoding of the simulated medial entorhinal activity indicates that ensembles represent forward reward locations while the animal deliberates at the choice point, emulating the in vivo results.
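
As a schematic illustration of the cue-to-reward association described above, the following Python sketch applies an outer-product Hebbian rule between a one-hot stimulus layer and a reward-cell layer during training, then retrieves the goal-associated reward cells from a cue at test to bias the turn direction. The layer sizes, learning rate, cue-to-goal mapping, and noise level are illustrative assumptions, not the parameters of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stim, n_reward = 2, 2           # assumed: two cues, two goal arms (left/right)
eta = 0.1                         # assumed Hebbian learning rate
W = np.zeros((n_reward, n_stim))  # stimulus -> reward-cell association weights

def hebbian_update(W, stim, reward, eta=eta):
    """Outer-product Hebbian association between co-active stimulus and reward cells."""
    return W + eta * np.outer(reward, stim)

# Training trials: each cue is repeatedly paired with reward at one goal arm.
cue_to_goal = {0: 0, 1: 1}        # assumed cue -> rewarded arm mapping
for trial in range(40):
    cue = rng.integers(n_stim)
    stim = np.eye(n_stim)[cue]                    # one-hot cue input
    reward = np.eye(n_reward)[cue_to_goal[cue]]   # reward cells active at the cued goal
    W = hebbian_update(W, stim, reward)

# Testing: the cue retrieves goal-associated reward cells at the choice point,
# biasing the look-ahead scan toward the corresponding arm.
def choose_turn(W, stim, noise=0.0):
    reward_hat = W @ stim + noise * rng.standard_normal(n_reward)
    return ["left", "right"][int(np.argmax(reward_hat))]

perfect = np.eye(n_stim)[1]                          # clean presentation of cue 1
imperfect = 0.6 * perfect + 0.4 * np.eye(n_stim)[0]  # degraded (mixed) stimulus
print(choose_turn(W, perfect))                 # -> "right"
print(choose_turn(W, imperfect, noise=0.5))    # usually "right"; errors become more likely
```

The sketch omits the grid, place, and persistent spiking layers; it is meant only to show how a learned stimulus-reward association can convert a cue into a directional bias under the stated assumptions.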