Disinformation on social media—commonly called “fake news”—has become a major concern around the world, and many fact-checking initiatives have been launched in response. However, if the presentation format of fact-checked results is not persuasive, fact-checking may not be effective. For instance, Facebook tested flagging dubious articles in 2017 but concluded that the flags were ineffective and removed the feature. We conducted three experiments with social media users to investigate two approaches to implementing a fake news flag: one designed to be most effective when processed by automatic cognition (System 1) and the other designed to be most effective when processed by deliberate cognition (System 2). Both interventions were effective, and an intervention that combined both approaches was about twice as effective. Awareness training on the meaning of the flags increased the effectiveness of the System 2 intervention but not the System 1 intervention. The believability of an article influenced the extent to which users would engage with it (e.g., read, like, comment, and share). Our results suggest that both theoretical routes can be used—separately or together—in the presentation of fact-checking results to reduce the influence of fake news on social media users.
Objective: To understand how people respond to COVID-19 screening chatbots.
Materials and Methods: We conducted an online experiment with 371 participants who viewed a COVID-19 screening session between a hotline agent (chatbot or human) and a user with mild or severe symptoms.
Results: The primary factor driving user response to screening hotlines (human or chatbot) is perceptions of the agent’s ability. When ability is the same, users view chatbots no differently from, or more positively than, human agents. The primary factor driving perceptions of ability is the user’s trust in the hotline provider, with a slight negative bias against chatbots’ ability. Asian participants perceived higher ability and benevolence than White participants.
Conclusion: Ensuring that COVID-19 screening chatbots provide high-quality service is critical but not sufficient for widespread adoption. The key is to emphasize the chatbot’s ability and to assure users that it delivers the same quality of service as human agents.