It is undeniable that conversational agents have taken the world by storm. Chatbots such as ChatGPT (Generative Pre-trained Transformer) are used by millions of people every month for translation, financial advice, and even as substitutes for therapists. Care is needed when interacting with such technology, especially through natural language, since our relationship with artificial agents is shaped both by the technology’s features and by the manufacturer's goals. This paper, organized into three sections, explores whether ChatGPT’s output can be described as ‘bullshit’. The first section focuses on ChatGPT’s architecture and development; the second presents a new formulation of Frankfurt’s concept of ‘bullshit’, highlighting its central features of indifference, deception, and manipulation; the third tackles the title question and answers it affirmatively, arguing that ChatGPT can be considered a ‘bullshit’ generator.