The Networked Society has brought about opportunities, such as citizens’ journalism, as well as challenges, such as the proliferation of media distortions. To keep up with such a sheer amount of (mis)information, citizens need to develop critical media literacy. We believe that, even though not sufficient to guarantee a gatekeeping process, human-computer interaction can help users develop epistemic vigilance. To this end, we present the Fake News Immunity chatbot, designed to teach users how to recognize misinformation by leveraging Fallacy Theory. Fallacies, arguments which seem valid but are not, constitute privileged viewpoints for the identification of misinformation. We then evaluate the chatbot as an educational tool through a gamification experience with two cohorts of students and discuss the achieved learning outcomes as well as recommendations for future improvement.