Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, hand-picked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but hand-picking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use word embeddings, a well-established method for creating distributed feature representations, as lexical features and retrieval cues. We show that the similarity between the features and the cues (a measure of plausibility) predicts total reading times in Cunnings and Sturt's eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons. In addition to this methodological contribution, our results suggest that, contrary to Cunnings and Sturt's original conclusions, focused words might be more prominent in memory, making them less susceptible to interference, as predicted by a recent extension to ACT-R (Engelmann, Jäger, & Vasishth, 2019).
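
As an illustration of the core idea, graded plausibility can be operationalized as the cosine similarity between a retrieval cue's embedding and each candidate's feature embedding. The sketch below uses made-up toy vectors (real models would use pretrained, high-dimensional embeddings); the word labels are only for exposition:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" (illustrative values only, not actual
# pretrained vectors, which would have hundreds of dimensions).
cue_shattered = [0.9, 0.1, 0.0, 0.2]   # retrieval cue projected by the verb
feat_plate    = [0.8, 0.2, 0.1, 0.3]   # plausible shatterer (distractor)
feat_letter   = [0.1, 0.9, 0.7, 0.0]   # implausible shatterer (target)

sim_plate = cosine(cue_shattered, feat_plate)
sim_letter = cosine(cue_shattered, feat_letter)
# Higher cue-feature similarity for "plate" than "letter" is what lets a
# distractor intrude on retrieval, producing an illusion of plausibility.
assert sim_plate > sim_letter
```

Because the similarity score is continuous, it can serve directly as a graded activation boost in a cue-based retrieval model rather than a binary feature match.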