The rapid spread of conversational AI, as well as the potential for personal conversations with chatbots, makes it relevant to examine what norms and values underlie chatbot responses. This article examines the feeling rules for anger implicitly communicated by a recent chatbot (ChatGPT). By querying the chatbot about appropriate and inappropriate anger, the study shows how specific feeling rules are articulated by AI. The chatbot communicates norms of productive, respectful, constructive, controlled and calm expression of anger through talk and, in doing so, relies on communication as a pervasive cultural repertoire. Based on a rereading of Boltanski and Thévenot’s (2006) economies of worth with a focus on feeling rules, it is argued that different moral repertoires imply different feeling rules. Analysing the chatbot’s responses through this theoretical framework shows that it relies primarily on the industrial and the domestic orders of worth to assess anger. The chatbot articulates the problem of anger as one of unproductiveness and disrespect. The feeling rules implied in the chatbot’s responses reflect a neoliberal conception of the self as individually responsible, productive, self-regulating, emotionally competent and able to find solutions. The chatbot’s seemingly neutral advice potentially depoliticises anger, disciplines people to remain productive and respectful, and narrows the scope of anger expressions deemed acceptable.