The semantic memory literature has recently seen the emergence of predictive neural network models that use principles of error-driven learning to create a "neural embedding" of word meaning when trained on a language corpus. These models have taken the field by storm, partly due to the resurgence of connectionist architectures, but also due to their remarkable success at fitting human data. However, predictive embedding models also inherit the weaknesses of their ancestors. In this paper, we explore the effect of catastrophic interference (CI), a long-known flaw of neural network models, on a modern neural embedding model of semantic representation (word2vec). We use homonyms as an index of how a word's representation is biased by the order in which the corpus is learned. If the corpus is learned in random order, the final representation will tend towards the dominant sense of the word (bank → money) rather than the subordinate sense (bank → river). However, if the subordinate sense is presented to the network after learning the