The distributional hypothesis (that the meaning of a word corresponds to the contexts in which it occurs) struggles to explain how people represent low-frequency words. When distributional information is lacking, people use phonological cues, but those cues indicate broad categories, not word meanings per se. We conducted two preregistered experiments to test the hypothesis that people recruit similar-sounding words to represent and access the meanings of low-frequency words. In Experiment 1, native English speakers made semantic relatedness decisions about a cue word (e.g., dodge) followed either by a non-arbitrary target (evade) that overlaps in form and meaning with an attractor word (avoid, which is semantically related to dodge) or by an arbitrary control (elude) matched with the non-arbitrary target on formal and distributional similarity to the cue word. As predicted, participants judged non-arbitrary targets to be semantically related to cues faster and more often than controls. In Experiment 2, participants made semantic relatedness decisions about sentences containing the same cue and target words (e.g., The kids dodged something and She tried to evade/elude the officer). We used MouseView.js to blur the sentences and create a fovea-like aperture directed by the participant’s cursor, allowing us to measure fixation durations. Although we did not observe the predicted difference between non-arbitrary and arbitrary targets themselves, we found a lag effect: fixations were shorter on words following non-arbitrary targets, indicating an advantage in accessing the meanings of non-arbitrary words. These experiments provide evidence that similar-sounding words bolster mental representations of low-frequency words.