Large language models can handle sophisticated natural language processing tasks, which raises the question of how their understanding of semantic meaning compares with that of human beings. Proponents of embodied cognition often point out that, because these models are trained solely on text, their representations of semantic content are not grounded in sensorimotor experience. This paper contends that human cognition exhibits capabilities that fit with both the embodied and the artificial intelligence approaches: evidence suggests that semantic memory is partially grounded in sensorimotor systems and partially dependent on language-specific learning. From this perspective, large language models demonstrate the richness of language as a source of semantic information, and they show how our experience with language might scaffold and extend our capacity to make sense of the world. For an embodied mind, language provides access to a valuable form of ungrounded cognition.
This article is part of the theme issue ‘Minds in movement: embodied cognition in the age of artificial intelligence’.