This paper introduces a theory of mind that positions language as a cognitive tool in its own right for the optimization of biological fitness. I argue that the human linguistic reconstruction of reality results from biological memory and from adaptation to uncertain environmental conditions, serving the reaffirmation of the Self-as-symbol. I demonstrate that pretrained language models, such as ChatGPT, lack embodied grounding, which compromises their ability to model the world adequately through language, owing to the absence of subjecthood and of conscious states for event recognition and partition. At a deeper level, I challenge the notion that the constitution of a semiotic Self relies on computational reflection, arguing against reducing human representation to data structures and stressing the need for accurate models of human representation through language. This underscores the distinction between transformers as posthuman agents and humans as purposeful biological agents, and it highlights the human capacity for purposeful biological adjustment and optimization. A main conclusion is that the capacity to integrate information does not amount to phenomenal consciousness, contrary to what Integrated Information Theory claims. Moreover, while language models exhibit superior computational capacity, they lack the genuine consciousness that would afford them multiscalar experience anchored in the physical world, a hallmark of human cognition. Nevertheless, the paper anticipates the emergence of new in silico conceptualizers capable of defining themselves as phenomenal agents with symbolic contours and specific goals.