Social media have changed communication practices by creating an acute need for continuous interaction. The use of social chatbots is growing as an effective way to communicate with publics. Bots have become social actors, and therefore someone must account for their actions. Since responsibility is bound to agency and rationality, it cannot be directly attributed to bots. Who, then, should be held responsible for the actions of non-human agents, particularly when the consequences of those actions are negative? We address this controversy from both theoretical and empirical perspectives. First, we discuss the adequacy of the notions of moral responsibility and accountability for non-human artificial agents, which are governed by complex, intentionally opaque, and unpredictable interactions and processes. We do so from the two currently predominant approaches: the context-dependent and the structuralist. Second, we draw on the assumption that the failure of a computer system is an opportunity to gain knowledge about the vested interests behind its design and functioning. Taking the concept of the media frame as an implicit way of identifying the agent of a story, we then perform an exploratory analysis of how the media attributed responsibility in the paradigmatic case of Tay, a chatbot launched by Microsoft in 2016 that turned into a racist, Nazi, and homophobic hate speaker. Our results illustrate the difficulties the media experienced in consistently attributing responsibility for the chatbot's malfunction. They show that the discourse is, in general, simplistic, uncritical, and misleading, and tends to depict a reality that favors business interests. We conclude that, although all the actors interacting with the chatbot share responsibility for its actions, only Microsoft must account for those actions, both retrospectively and prospectively.