Misinformation makes democratic governance harder, especially in developing countries. Despite its real-world import, little is known about how to combat misinformation outside of the United States, particularly in places with low education, accelerating Internet access, and encrypted information sharing. This study uses a field experiment in India to test the efficacy of a pedagogical intervention on respondents’ ability to identify misinformation during the 2019 elections (N = 1,224). Treated respondents received an hour-long, in-person media literacy training in which enumerators demonstrated two tools to identify misinformation (reverse image searching and navigating a fact-checking website) and discussed inoculation strategies, corrections, and the importance of verifying misinformation, all in a coherent learning module. Receiving this media literacy intervention did not significantly increase respondents’ ability to identify misinformation on average. However, treated respondents who support the ruling party became significantly less able to identify pro-attitudinal misinformation. These findings point to the resilience of misinformation in India and the presence of motivated reasoning in a traditionally nonideological party system.
Fact-checks and corrections of falsehoods have emerged as effective ways to counter misinformation online. But in contexts with encrypted messaging applications (EMAs), corrections must necessarily emanate from peers. Are such social corrections effective? If so, how substantiated do corrective messages need to be? To answer these questions, we evaluate the effect of different types of social corrections on the persistence of misinformation in India (N = 5,100). Using an online experiment, we show that social corrections substantially reduce beliefs in misinformation, including beliefs deeply anchored in salient group identities. Importantly, these positive effects are not systematically attenuated by partisan motivated reasoning, highlighting a striking difference from Western contexts. We also find that the presence of a correction matters more than how sophisticated that correction is: substantiating a correction with a source improves its effect in only a minority of cases, and when social corrections are effective, citing a source does not drastically increase the size of their effect. These results have implications for both users and platforms and speak to countering misinformation in developing countries that rely on private messaging apps.