Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledge base to recognize and adapt to most situations we encounter. This knowledge base is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails re-instantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic information as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so too are experience-based representations integral to linguistic processing; further, sensory-perceptual experience is subject to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that emerge across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences.