A huge amount of potentially dangerous COVID-19 misinformation is appearing online. Here we use machine learning to quantify COVID-19 content among online opponents of establishment health guidance, in particular vaccinations ("anti-vax"). We find that the anti-vax community is developing a less focused debate around COVID-19 than its counterpart, the pro-vaccination ("pro-vax") community. However, the anti-vax community exhibits a broader range of "flavors" of COVID-19 topics, and hence can appeal to a broader cross-section of individuals seeking COVID-19 guidance online, e.g. individuals wary of a mandatory fast-tracked COVID-19 vaccine or those seeking alternative remedies. Hence the anti-vax community looks better positioned to attract fresh support going forward than the pro-vax community. This is concerning because widespread non-adoption of a COVID-19 vaccine would mean the world falls short of achieving herd immunity, leaving countries open to future COVID-19 resurgences. We provide a mechanistic model that interprets these results and could help in assessing the likely efficacy of intervention strategies. Our approach is scalable and hence tackles the urgent problem facing social media platforms of having to analyze huge volumes of online health misinformation and disinformation.
Index Terms: COVID-19, machine learning, topic modeling, mechanistic model, social computing.
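As a rough illustration of the topic-modeling step described above, the following sketch fits a small LDA model over example posts; the posts, the choice of LDA, and the number of topics are assumptions for illustration, not the authors' actual pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Example posts standing in for community content (placeholder data).
posts = [
    "mandatory covid vaccine fast tracked safety concerns",
    "natural remedies immunity vitamin c covid protection",
    "clinical trial data supports covid vaccine efficacy",
    "public health guidance masks distancing vaccination",
]

# Bag-of-words representation of the posts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Fit a small LDA model; the number of topic "flavors" is a guess.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)

# Top words per topic give a rough view of each COVID-19 topic "flavor".
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")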
We show that malicious COVID-19 content, including racism, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. We provide a first mapping of the online hate network across six major social media platforms. We demonstrate how malicious content can travel across this network in ways that subvert platform moderation efforts. Machine learning topic analysis shows quantitatively how online hate communities are sharpening COVID-19 as a weapon, with topics evolving rapidly and content becoming increasingly coherent. Based on mathematical modeling, we provide predictions of how changes to content moderation policies can slow the spread of malicious content.
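A minimal sketch of how such a cross-platform mapping can be represented, assuming communities as nodes and hyperlinks between their posts as directed edges; the platform names and links below are illustrative placeholders, not the mapped network itself.

import networkx as nx

# Directed edges: a post in the source community links to the target
# community, possibly on a different platform (illustrative names only).
links = [
    ("facebook/groupA", "vk/clusterB"),
    ("vk/clusterB", "gab/groupC"),
    ("gab/groupC", "telegram/channelD"),
    ("facebook/groupA", "telegram/channelD"),
]

G = nx.DiGraph()
G.add_edges_from(links)

# Bridge-like communities let content travel beyond any single platform's
# moderation reach; connected components show the reachable "multiverse".
print(nx.betweenness_centrality(G))
print(list(nx.weakly_connected_components(G)))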
We reveal hidden social media machinery that has allowed misinformation to thrive among mainstream users, but which is missing from current policy discussions. Specifically, we show how mainstream parenting communities on Facebook have been subject to a powerful, two-pronged misinformation machinery during the pandemic that has pulled them closer to extreme communities and their misinformation. The first prong involves a strengthening of the bond between mainstream parenting communities and pre-COVID conspiracy theory communities that promote misinformation about climate change, fluoride, chemtrails, and 5G. Alternative health communities have acted as the critical conduits. The second prong features an adjacent core of tightly bonded, yet largely under-the-radar, anti-vaccination communities that continually supplied COVID-19 and vaccine misinformation to the mainstream parenting communities. Our findings show why Facebook's own efforts to post reliable information about vaccines and COVID-19 have not been effective; why targeting the largest communities does not work; and how this machinery could keep generating new misinformation indefinitely. We provide a simple yet exactly solvable mathematical theory for the system's dynamics. It predicts a new strategy for controlling mainstream community tipping points. Our conclusions should be applicable to any social media platform with built-in community features, and they open up a new engineering approach to addressing online misinformation and other harms at scale.
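A toy numerical sketch of a tipping-point dynamic of the kind referred to above, assuming a simple logistic exposure/recovery model; the equation and rates are illustrative placeholders and not the exactly solvable theory in the paper.

def simulate(exposure_rate, recovery_rate, steps=200, dt=0.1, m0=0.05):
    """Fraction m of a mainstream community aligned with misinformation."""
    m = m0
    for _ in range(steps):
        # Growth from contact with misinformation-supplying communities,
        # decay from corrections and reliable information.
        dm = exposure_rate * m * (1.0 - m) - recovery_rate * m
        m = min(1.0, max(0.0, m + dt * dm))
    return m

# Below the tipping point (exposure < recovery) misinformation dies out;
# above it, the community settles at a persistent misinformation level.
print(simulate(exposure_rate=0.3, recovery_rate=0.4))
print(simulate(exposure_rate=0.6, recovery_rate=0.4))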
Online hate speech is a critical and worsening problem, with extremists using social media platforms to radicalize recruits and coordinate offline violent events. While much progress has been made in analyzing online hate speech, no study to date has classified multiple types of hate speech across both mainstream and fringe platforms. We conduct a supervised machine learning analysis of seven types of online hate speech on six interconnected online platforms. We find that offline trigger events, such as protests and elections, are often followed by increases in types of online hate speech that bear seemingly little connection to the underlying event. This occurs on both mainstream and fringe platforms, despite moderation efforts, raising new research questions about the relationship between offline events and online speech, and carrying implications for online content moderation.
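A minimal sketch of a supervised multi-class setup of the kind described above, assuming TF-IDF features and logistic regression; the texts, labels, and model choice are placeholders rather than the study's actual pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; the study distinguishes seven hate speech
# types across six platforms, with far more labeled examples than shown here.
train_texts = [
    "slur targeting an ethnic group example",
    "slur targeting a religious group example",
    "threatening language aimed at a gender group",
    "neutral discussion of an upcoming election",
]
train_labels = ["ethnicity", "religion", "gender", "none"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

# Classify new posts, e.g. ones collected after an offline trigger event.
print(clf.predict(["post combining the election with an ethnic slur"]))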