Distributional semantic models (DSMs) specify learning mechanisms by which humans construct a deep representation of word meaning from statistical regularities in language. Despite their remarkable success at fitting human semantic data, virtually all DSMs may be classified as prototype models in that they construct a single representation of a word's meaning aggregated across contexts. This prototype representation conflates a word's multiple meanings and senses into a central tendency, often losing the subordinate senses of a word in favor of more frequent ones. We present an alternative instance-based DSM built on the classic MINERVA 2 multiple-trace model of episodic memory. The model stores a representation of each language instance in a corpus, and a word's meaning is constructed on the fly when the model is presented with a retrieval cue. Across two experiments with homonyms, using both an artificial and a natural language corpus, we show that the instance-based model naturally accounts for the subordinate meanings of words in appropriate contexts, owing to nonlinear activation over stored instances, whereas classic prototype DSMs cannot. The instance-based account suggests that meaning may not be something that is created during learning or stored per se, but may instead be an artifact of retrieval from an episodic memory store.
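To make the retrieval mechanism concrete, the following is a minimal sketch of a MINERVA 2-style echo, assuming a cosine-like similarity raised to an odd power and an activation-weighted sum over stored traces; the function name `echo`, the exponent, the vector dimensionality, and the toy vocabulary are illustrative assumptions, not the exact parameterization used in the experiments.

```python
import numpy as np

def echo(probe, memory, power=3):
    """Construct a word's meaning at retrieval, MINERVA 2 style (sketch).

    probe  : (d,) cue vector, e.g. a word vector, optionally summed with
             vectors of disambiguating context words
    memory : (n, d) matrix of stored instance (trace) vectors
    power  : odd exponent on similarity; the nonlinearity lets a few
             highly similar traces dominate the retrieved echo
    """
    # cosine similarity between the probe and every stored trace
    sims = memory @ probe / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(probe) + 1e-12
    )
    # nonlinear activation: an odd power preserves sign while sharpening
    activations = sims ** power
    # echo content: activation-weighted sum of all stored traces
    return activations @ memory

# Toy illustration with hypothetical random word vectors: "bank" occurs
# mostly in a money sense, rarely in a river sense.
rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(300)
         for w in ["bank", "money", "deposit", "river", "shore"]}
sentences = [["bank", "money", "deposit"]] * 9 + [["bank", "river", "shore"]]
memory = np.vstack([sum(vocab[w] for w in s) for s in sentences])

dominant = echo(vocab["bank"], memory)                    # money sense dominates
subordinate = echo(vocab["bank"] + vocab["river"], memory)  # river sense recovered
```

In this sketch, probing with "bank" alone yields an echo dominated by the frequent money-sense traces, while adding a disambiguating context word to the cue lets the nonlinear activation recover the subordinate river sense, which is the behavior the abstract attributes to the instance-based model.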