Information technology is ubiquitous and has become an integral part of everyday life. With the ever-increasing pervasiveness and persuasiveness of Artificial Intelligence (AI), the function of socio-technical systems is changing: technology must now be regarded as playing a more active role. Technology, for example in the form of large language models accessed through a chat interface, is perceived as a social actor rather than as a passive instrument. The question of how and when trust in technology and its organisational controllers is well placed is therefore gaining relevance. In this article, we argue that simplistic views of trust, which do not reflect the active nature of AI systems, must be replaced by more elaborate models. Regulation alone does not cover the complex relationships between human users, AI systems, their creators, and their auditors. We argue that a radical paradigm shift is urgently needed: the current debate, which focuses the question of trust on explainable and ethical AI, is dangerously misguided. Technology gives some organisations the opportunity to leverage established prosocial trust relationships and repurpose them for their own narrow interests. The new model offers an interpretation of socio-technical systems inspired by many-body physics, structuring the interactions within such a system into fields and agents. This naturally explains the perceived agency of AI systems and leads to actionable recommendations for reframing the discourse about trust.