Containing the spread of misinformation on social media has been acknowledged as a major socio-technical challenge in recent years. Despite advances, practical and timely solutions for communicating verified (mis)information to social media users remain an evident need. We introduce a multi-agent approach to bridge Twitter users with fact-checked information. The first agent is a social bot that nudges users who share verified misinformation; the second is a conversational agent that checks whether a reputable fact-check is available and explains existing assessments in natural language. Both agents share the same requirements: evoking trust and being perceived by Twitter users as an opportunity to build their media literacy. To this end, we present two preliminary human-centred studies, the first seeking an adequate identity for the bot, and the second examining user preferences for credibility indicators when explaining the assessment of misinformation. The results indicate what this design research should pursue to create agents that are consistent in their presentation, friendly, engaging, and credible.

CCS Concepts: • Human-centered computing → Natural language interfaces.